Test Report: KVM_Linux_crio 20400

62166c5b3d4846dcb8bdc6cf847b2364ca5b5915:2025-02-11:38304

Test fail (9/327)

TestAddons/parallel/Ingress (154.08s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-046133 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-046133 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-046133 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [8907db89-70f9-4576-b1e0-7316d1a91e4e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [8907db89-70f9-4576-b1e0-7316d1a91e4e] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.004062615s
I0211 02:04:59.333097   19645 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-046133 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-046133 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.37462218s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-046133 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-046133 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.211
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-046133 -n addons-046133
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-046133 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-046133 logs -n 25: (1.156959552s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-869004                                                                     | download-only-869004 | jenkins | v1.35.0 | 11 Feb 25 02:02 UTC | 11 Feb 25 02:02 UTC |
	| delete  | -p download-only-521523                                                                     | download-only-521523 | jenkins | v1.35.0 | 11 Feb 25 02:02 UTC | 11 Feb 25 02:02 UTC |
	| delete  | -p download-only-869004                                                                     | download-only-869004 | jenkins | v1.35.0 | 11 Feb 25 02:02 UTC | 11 Feb 25 02:02 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-945729 | jenkins | v1.35.0 | 11 Feb 25 02:02 UTC |                     |
	|         | binary-mirror-945729                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:40719                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-945729                                                                     | binary-mirror-945729 | jenkins | v1.35.0 | 11 Feb 25 02:02 UTC | 11 Feb 25 02:02 UTC |
	| addons  | enable dashboard -p                                                                         | addons-046133        | jenkins | v1.35.0 | 11 Feb 25 02:02 UTC |                     |
	|         | addons-046133                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-046133        | jenkins | v1.35.0 | 11 Feb 25 02:02 UTC |                     |
	|         | addons-046133                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-046133 --wait=true                                                                | addons-046133        | jenkins | v1.35.0 | 11 Feb 25 02:02 UTC | 11 Feb 25 02:04 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-046133 addons disable                                                                | addons-046133        | jenkins | v1.35.0 | 11 Feb 25 02:04 UTC | 11 Feb 25 02:04 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-046133 addons disable                                                                | addons-046133        | jenkins | v1.35.0 | 11 Feb 25 02:04 UTC | 11 Feb 25 02:04 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-046133 addons disable                                                                | addons-046133        | jenkins | v1.35.0 | 11 Feb 25 02:04 UTC | 11 Feb 25 02:04 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-046133 addons                                                                        | addons-046133        | jenkins | v1.35.0 | 11 Feb 25 02:04 UTC | 11 Feb 25 02:04 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-046133 addons                                                                        | addons-046133        | jenkins | v1.35.0 | 11 Feb 25 02:04 UTC | 11 Feb 25 02:04 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-046133 ip                                                                            | addons-046133        | jenkins | v1.35.0 | 11 Feb 25 02:04 UTC | 11 Feb 25 02:04 UTC |
	| addons  | addons-046133 addons disable                                                                | addons-046133        | jenkins | v1.35.0 | 11 Feb 25 02:04 UTC | 11 Feb 25 02:04 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-046133 addons                                                                        | addons-046133        | jenkins | v1.35.0 | 11 Feb 25 02:04 UTC | 11 Feb 25 02:04 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-046133 ssh curl -s                                                                   | addons-046133        | jenkins | v1.35.0 | 11 Feb 25 02:04 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-046133 addons                                                                        | addons-046133        | jenkins | v1.35.0 | 11 Feb 25 02:05 UTC | 11 Feb 25 02:05 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-046133        | jenkins | v1.35.0 | 11 Feb 25 02:05 UTC | 11 Feb 25 02:05 UTC |
	|         | -p addons-046133                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-046133 ssh cat                                                                       | addons-046133        | jenkins | v1.35.0 | 11 Feb 25 02:05 UTC | 11 Feb 25 02:05 UTC |
	|         | /opt/local-path-provisioner/pvc-cc30bfbf-dfc2-43dd-a5a7-18400646de0d_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-046133 addons disable                                                                | addons-046133        | jenkins | v1.35.0 | 11 Feb 25 02:05 UTC | 11 Feb 25 02:05 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-046133 addons disable                                                                | addons-046133        | jenkins | v1.35.0 | 11 Feb 25 02:05 UTC | 11 Feb 25 02:05 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-046133 addons                                                                        | addons-046133        | jenkins | v1.35.0 | 11 Feb 25 02:05 UTC | 11 Feb 25 02:05 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-046133 addons                                                                        | addons-046133        | jenkins | v1.35.0 | 11 Feb 25 02:05 UTC | 11 Feb 25 02:05 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-046133 ip                                                                            | addons-046133        | jenkins | v1.35.0 | 11 Feb 25 02:07 UTC | 11 Feb 25 02:07 UTC |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/11 02:02:03
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0211 02:02:03.737920   20276 out.go:345] Setting OutFile to fd 1 ...
	I0211 02:02:03.738014   20276 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 02:02:03.738021   20276 out.go:358] Setting ErrFile to fd 2...
	I0211 02:02:03.738025   20276 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 02:02:03.738208   20276 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20400-12456/.minikube/bin
	I0211 02:02:03.738787   20276 out.go:352] Setting JSON to false
	I0211 02:02:03.739603   20276 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":2675,"bootTime":1739236649,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0211 02:02:03.739698   20276 start.go:139] virtualization: kvm guest
	I0211 02:02:03.741717   20276 out.go:177] * [addons-046133] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0211 02:02:03.743046   20276 out.go:177]   - MINIKUBE_LOCATION=20400
	I0211 02:02:03.743080   20276 notify.go:220] Checking for updates...
	I0211 02:02:03.745624   20276 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0211 02:02:03.746726   20276 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20400-12456/kubeconfig
	I0211 02:02:03.747834   20276 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20400-12456/.minikube
	I0211 02:02:03.748880   20276 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0211 02:02:03.749858   20276 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0211 02:02:03.751109   20276 driver.go:394] Setting default libvirt URI to qemu:///system
	I0211 02:02:03.782963   20276 out.go:177] * Using the kvm2 driver based on user configuration
	I0211 02:02:03.784014   20276 start.go:297] selected driver: kvm2
	I0211 02:02:03.784027   20276 start.go:901] validating driver "kvm2" against <nil>
	I0211 02:02:03.784039   20276 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0211 02:02:03.784729   20276 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0211 02:02:03.784819   20276 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20400-12456/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0211 02:02:03.799609   20276 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0211 02:02:03.799661   20276 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0211 02:02:03.799881   20276 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0211 02:02:03.799909   20276 cni.go:84] Creating CNI manager for ""
	I0211 02:02:03.799953   20276 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0211 02:02:03.799963   20276 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0211 02:02:03.800012   20276 start.go:340] cluster config:
	{Name:addons-046133 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-046133 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0211 02:02:03.800094   20276 iso.go:125] acquiring lock: {Name:mkf866c6e52b4efa55cc59a9f329105471716f9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0211 02:02:03.801657   20276 out.go:177] * Starting "addons-046133" primary control-plane node in "addons-046133" cluster
	I0211 02:02:03.802843   20276 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0211 02:02:03.802886   20276 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20400-12456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0211 02:02:03.802899   20276 cache.go:56] Caching tarball of preloaded images
	I0211 02:02:03.802976   20276 preload.go:172] Found /home/jenkins/minikube-integration/20400-12456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0211 02:02:03.802986   20276 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0211 02:02:03.803261   20276 profile.go:143] Saving config to /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/config.json ...
	I0211 02:02:03.803283   20276 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/config.json: {Name:mk767f25c36444748f0dc623ed6901b4b163c41b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 02:02:03.803420   20276 start.go:360] acquireMachinesLock for addons-046133: {Name:mk0cbf79bfabdb28d0a301765db34c154a72eff0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0211 02:02:03.803464   20276 start.go:364] duration metric: took 31.744µs to acquireMachinesLock for "addons-046133"
	I0211 02:02:03.803483   20276 start.go:93] Provisioning new machine with config: &{Name:addons-046133 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-046133 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0211 02:02:03.803539   20276 start.go:125] createHost starting for "" (driver="kvm2")
	I0211 02:02:03.805658   20276 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0211 02:02:03.805772   20276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:02:03.805814   20276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:02:03.820277   20276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44789
	I0211 02:02:03.820715   20276 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:02:03.821315   20276 main.go:141] libmachine: Using API Version  1
	I0211 02:02:03.821337   20276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:02:03.821668   20276 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:02:03.821842   20276 main.go:141] libmachine: (addons-046133) Calling .GetMachineName
	I0211 02:02:03.821990   20276 main.go:141] libmachine: (addons-046133) Calling .DriverName
	I0211 02:02:03.822115   20276 start.go:159] libmachine.API.Create for "addons-046133" (driver="kvm2")
	I0211 02:02:03.822138   20276 client.go:168] LocalClient.Create starting
	I0211 02:02:03.822174   20276 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca.pem
	I0211 02:02:04.137460   20276 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/cert.pem
	I0211 02:02:04.313950   20276 main.go:141] libmachine: Running pre-create checks...
	I0211 02:02:04.313974   20276 main.go:141] libmachine: (addons-046133) Calling .PreCreateCheck
	I0211 02:02:04.314443   20276 main.go:141] libmachine: (addons-046133) Calling .GetConfigRaw
	I0211 02:02:04.314945   20276 main.go:141] libmachine: Creating machine...
	I0211 02:02:04.314961   20276 main.go:141] libmachine: (addons-046133) Calling .Create
	I0211 02:02:04.315118   20276 main.go:141] libmachine: (addons-046133) creating KVM machine...
	I0211 02:02:04.315143   20276 main.go:141] libmachine: (addons-046133) creating network...
	I0211 02:02:04.316319   20276 main.go:141] libmachine: (addons-046133) DBG | found existing default KVM network
	I0211 02:02:04.316971   20276 main.go:141] libmachine: (addons-046133) DBG | I0211 02:02:04.316825   20298 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002111f0}
	I0211 02:02:04.316997   20276 main.go:141] libmachine: (addons-046133) DBG | created network xml: 
	I0211 02:02:04.317054   20276 main.go:141] libmachine: (addons-046133) DBG | <network>
	I0211 02:02:04.317093   20276 main.go:141] libmachine: (addons-046133) DBG |   <name>mk-addons-046133</name>
	I0211 02:02:04.317103   20276 main.go:141] libmachine: (addons-046133) DBG |   <dns enable='no'/>
	I0211 02:02:04.317110   20276 main.go:141] libmachine: (addons-046133) DBG |   
	I0211 02:02:04.317121   20276 main.go:141] libmachine: (addons-046133) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0211 02:02:04.317131   20276 main.go:141] libmachine: (addons-046133) DBG |     <dhcp>
	I0211 02:02:04.317140   20276 main.go:141] libmachine: (addons-046133) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0211 02:02:04.317146   20276 main.go:141] libmachine: (addons-046133) DBG |     </dhcp>
	I0211 02:02:04.317157   20276 main.go:141] libmachine: (addons-046133) DBG |   </ip>
	I0211 02:02:04.317162   20276 main.go:141] libmachine: (addons-046133) DBG |   
	I0211 02:02:04.317205   20276 main.go:141] libmachine: (addons-046133) DBG | </network>
	I0211 02:02:04.317231   20276 main.go:141] libmachine: (addons-046133) DBG | 
	I0211 02:02:04.322218   20276 main.go:141] libmachine: (addons-046133) DBG | trying to create private KVM network mk-addons-046133 192.168.39.0/24...
	I0211 02:02:04.382208   20276 main.go:141] libmachine: (addons-046133) DBG | private KVM network mk-addons-046133 192.168.39.0/24 created
	I0211 02:02:04.382253   20276 main.go:141] libmachine: (addons-046133) DBG | I0211 02:02:04.382156   20298 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20400-12456/.minikube
	I0211 02:02:04.382267   20276 main.go:141] libmachine: (addons-046133) setting up store path in /home/jenkins/minikube-integration/20400-12456/.minikube/machines/addons-046133 ...
	I0211 02:02:04.382315   20276 main.go:141] libmachine: (addons-046133) building disk image from file:///home/jenkins/minikube-integration/20400-12456/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0211 02:02:04.382344   20276 main.go:141] libmachine: (addons-046133) Downloading /home/jenkins/minikube-integration/20400-12456/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20400-12456/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0211 02:02:04.648491   20276 main.go:141] libmachine: (addons-046133) DBG | I0211 02:02:04.648355   20298 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20400-12456/.minikube/machines/addons-046133/id_rsa...
	I0211 02:02:04.905301   20276 main.go:141] libmachine: (addons-046133) DBG | I0211 02:02:04.905166   20298 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20400-12456/.minikube/machines/addons-046133/addons-046133.rawdisk...
	I0211 02:02:04.905330   20276 main.go:141] libmachine: (addons-046133) DBG | Writing magic tar header
	I0211 02:02:04.905341   20276 main.go:141] libmachine: (addons-046133) DBG | Writing SSH key tar header
	I0211 02:02:04.905348   20276 main.go:141] libmachine: (addons-046133) DBG | I0211 02:02:04.905284   20298 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20400-12456/.minikube/machines/addons-046133 ...
	I0211 02:02:04.905418   20276 main.go:141] libmachine: (addons-046133) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20400-12456/.minikube/machines/addons-046133
	I0211 02:02:04.905446   20276 main.go:141] libmachine: (addons-046133) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20400-12456/.minikube/machines
	I0211 02:02:04.905460   20276 main.go:141] libmachine: (addons-046133) setting executable bit set on /home/jenkins/minikube-integration/20400-12456/.minikube/machines/addons-046133 (perms=drwx------)
	I0211 02:02:04.905474   20276 main.go:141] libmachine: (addons-046133) setting executable bit set on /home/jenkins/minikube-integration/20400-12456/.minikube/machines (perms=drwxr-xr-x)
	I0211 02:02:04.905485   20276 main.go:141] libmachine: (addons-046133) setting executable bit set on /home/jenkins/minikube-integration/20400-12456/.minikube (perms=drwxr-xr-x)
	I0211 02:02:04.905498   20276 main.go:141] libmachine: (addons-046133) setting executable bit set on /home/jenkins/minikube-integration/20400-12456 (perms=drwxrwxr-x)
	I0211 02:02:04.905504   20276 main.go:141] libmachine: (addons-046133) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0211 02:02:04.905515   20276 main.go:141] libmachine: (addons-046133) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0211 02:02:04.905528   20276 main.go:141] libmachine: (addons-046133) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20400-12456/.minikube
	I0211 02:02:04.905535   20276 main.go:141] libmachine: (addons-046133) creating domain...
	I0211 02:02:04.905546   20276 main.go:141] libmachine: (addons-046133) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20400-12456
	I0211 02:02:04.905558   20276 main.go:141] libmachine: (addons-046133) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0211 02:02:04.905570   20276 main.go:141] libmachine: (addons-046133) DBG | checking permissions on dir: /home/jenkins
	I0211 02:02:04.905580   20276 main.go:141] libmachine: (addons-046133) DBG | checking permissions on dir: /home
	I0211 02:02:04.905590   20276 main.go:141] libmachine: (addons-046133) DBG | skipping /home - not owner
	I0211 02:02:04.906486   20276 main.go:141] libmachine: (addons-046133) define libvirt domain using xml: 
	I0211 02:02:04.906530   20276 main.go:141] libmachine: (addons-046133) <domain type='kvm'>
	I0211 02:02:04.906541   20276 main.go:141] libmachine: (addons-046133)   <name>addons-046133</name>
	I0211 02:02:04.906547   20276 main.go:141] libmachine: (addons-046133)   <memory unit='MiB'>4000</memory>
	I0211 02:02:04.906552   20276 main.go:141] libmachine: (addons-046133)   <vcpu>2</vcpu>
	I0211 02:02:04.906556   20276 main.go:141] libmachine: (addons-046133)   <features>
	I0211 02:02:04.906562   20276 main.go:141] libmachine: (addons-046133)     <acpi/>
	I0211 02:02:04.906566   20276 main.go:141] libmachine: (addons-046133)     <apic/>
	I0211 02:02:04.906571   20276 main.go:141] libmachine: (addons-046133)     <pae/>
	I0211 02:02:04.906575   20276 main.go:141] libmachine: (addons-046133)     
	I0211 02:02:04.906579   20276 main.go:141] libmachine: (addons-046133)   </features>
	I0211 02:02:04.906584   20276 main.go:141] libmachine: (addons-046133)   <cpu mode='host-passthrough'>
	I0211 02:02:04.906618   20276 main.go:141] libmachine: (addons-046133)   
	I0211 02:02:04.906639   20276 main.go:141] libmachine: (addons-046133)   </cpu>
	I0211 02:02:04.906651   20276 main.go:141] libmachine: (addons-046133)   <os>
	I0211 02:02:04.906660   20276 main.go:141] libmachine: (addons-046133)     <type>hvm</type>
	I0211 02:02:04.906670   20276 main.go:141] libmachine: (addons-046133)     <boot dev='cdrom'/>
	I0211 02:02:04.906680   20276 main.go:141] libmachine: (addons-046133)     <boot dev='hd'/>
	I0211 02:02:04.906691   20276 main.go:141] libmachine: (addons-046133)     <bootmenu enable='no'/>
	I0211 02:02:04.906700   20276 main.go:141] libmachine: (addons-046133)   </os>
	I0211 02:02:04.906709   20276 main.go:141] libmachine: (addons-046133)   <devices>
	I0211 02:02:04.906725   20276 main.go:141] libmachine: (addons-046133)     <disk type='file' device='cdrom'>
	I0211 02:02:04.906740   20276 main.go:141] libmachine: (addons-046133)       <source file='/home/jenkins/minikube-integration/20400-12456/.minikube/machines/addons-046133/boot2docker.iso'/>
	I0211 02:02:04.906754   20276 main.go:141] libmachine: (addons-046133)       <target dev='hdc' bus='scsi'/>
	I0211 02:02:04.906764   20276 main.go:141] libmachine: (addons-046133)       <readonly/>
	I0211 02:02:04.906773   20276 main.go:141] libmachine: (addons-046133)     </disk>
	I0211 02:02:04.906784   20276 main.go:141] libmachine: (addons-046133)     <disk type='file' device='disk'>
	I0211 02:02:04.906799   20276 main.go:141] libmachine: (addons-046133)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0211 02:02:04.906816   20276 main.go:141] libmachine: (addons-046133)       <source file='/home/jenkins/minikube-integration/20400-12456/.minikube/machines/addons-046133/addons-046133.rawdisk'/>
	I0211 02:02:04.906828   20276 main.go:141] libmachine: (addons-046133)       <target dev='hda' bus='virtio'/>
	I0211 02:02:04.906837   20276 main.go:141] libmachine: (addons-046133)     </disk>
	I0211 02:02:04.906848   20276 main.go:141] libmachine: (addons-046133)     <interface type='network'>
	I0211 02:02:04.906861   20276 main.go:141] libmachine: (addons-046133)       <source network='mk-addons-046133'/>
	I0211 02:02:04.906893   20276 main.go:141] libmachine: (addons-046133)       <model type='virtio'/>
	I0211 02:02:04.906904   20276 main.go:141] libmachine: (addons-046133)     </interface>
	I0211 02:02:04.906912   20276 main.go:141] libmachine: (addons-046133)     <interface type='network'>
	I0211 02:02:04.906925   20276 main.go:141] libmachine: (addons-046133)       <source network='default'/>
	I0211 02:02:04.906936   20276 main.go:141] libmachine: (addons-046133)       <model type='virtio'/>
	I0211 02:02:04.906946   20276 main.go:141] libmachine: (addons-046133)     </interface>
	I0211 02:02:04.906961   20276 main.go:141] libmachine: (addons-046133)     <serial type='pty'>
	I0211 02:02:04.906973   20276 main.go:141] libmachine: (addons-046133)       <target port='0'/>
	I0211 02:02:04.906983   20276 main.go:141] libmachine: (addons-046133)     </serial>
	I0211 02:02:04.906999   20276 main.go:141] libmachine: (addons-046133)     <console type='pty'>
	I0211 02:02:04.907009   20276 main.go:141] libmachine: (addons-046133)       <target type='serial' port='0'/>
	I0211 02:02:04.907018   20276 main.go:141] libmachine: (addons-046133)     </console>
	I0211 02:02:04.907031   20276 main.go:141] libmachine: (addons-046133)     <rng model='virtio'>
	I0211 02:02:04.907056   20276 main.go:141] libmachine: (addons-046133)       <backend model='random'>/dev/random</backend>
	I0211 02:02:04.907076   20276 main.go:141] libmachine: (addons-046133)     </rng>
	I0211 02:02:04.907087   20276 main.go:141] libmachine: (addons-046133)     
	I0211 02:02:04.907094   20276 main.go:141] libmachine: (addons-046133)     
	I0211 02:02:04.907101   20276 main.go:141] libmachine: (addons-046133)   </devices>
	I0211 02:02:04.907110   20276 main.go:141] libmachine: (addons-046133) </domain>
	I0211 02:02:04.907135   20276 main.go:141] libmachine: (addons-046133) 
	I0211 02:02:04.913369   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined MAC address 52:54:00:ad:33:ff in network default
	I0211 02:02:04.913815   20276 main.go:141] libmachine: (addons-046133) starting domain...
	I0211 02:02:04.913832   20276 main.go:141] libmachine: (addons-046133) ensuring networks are active...
	I0211 02:02:04.913839   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:04.914434   20276 main.go:141] libmachine: (addons-046133) Ensuring network default is active
	I0211 02:02:04.914767   20276 main.go:141] libmachine: (addons-046133) Ensuring network mk-addons-046133 is active
	I0211 02:02:04.915336   20276 main.go:141] libmachine: (addons-046133) getting domain XML...
	I0211 02:02:04.915975   20276 main.go:141] libmachine: (addons-046133) creating domain...
	I0211 02:02:06.302416   20276 main.go:141] libmachine: (addons-046133) waiting for IP...
	I0211 02:02:06.303275   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:06.303664   20276 main.go:141] libmachine: (addons-046133) DBG | unable to find current IP address of domain addons-046133 in network mk-addons-046133
	I0211 02:02:06.303718   20276 main.go:141] libmachine: (addons-046133) DBG | I0211 02:02:06.303663   20298 retry.go:31] will retry after 259.618214ms: waiting for domain to come up
	I0211 02:02:06.565080   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:06.565497   20276 main.go:141] libmachine: (addons-046133) DBG | unable to find current IP address of domain addons-046133 in network mk-addons-046133
	I0211 02:02:06.565535   20276 main.go:141] libmachine: (addons-046133) DBG | I0211 02:02:06.565472   20298 retry.go:31] will retry after 360.763549ms: waiting for domain to come up
	I0211 02:02:06.927978   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:06.928386   20276 main.go:141] libmachine: (addons-046133) DBG | unable to find current IP address of domain addons-046133 in network mk-addons-046133
	I0211 02:02:06.928412   20276 main.go:141] libmachine: (addons-046133) DBG | I0211 02:02:06.928360   20298 retry.go:31] will retry after 320.927535ms: waiting for domain to come up
	I0211 02:02:07.250899   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:07.251223   20276 main.go:141] libmachine: (addons-046133) DBG | unable to find current IP address of domain addons-046133 in network mk-addons-046133
	I0211 02:02:07.251246   20276 main.go:141] libmachine: (addons-046133) DBG | I0211 02:02:07.251193   20298 retry.go:31] will retry after 494.751587ms: waiting for domain to come up
	I0211 02:02:07.747805   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:07.748151   20276 main.go:141] libmachine: (addons-046133) DBG | unable to find current IP address of domain addons-046133 in network mk-addons-046133
	I0211 02:02:07.748179   20276 main.go:141] libmachine: (addons-046133) DBG | I0211 02:02:07.748106   20298 retry.go:31] will retry after 520.416639ms: waiting for domain to come up
	I0211 02:02:08.269813   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:08.270202   20276 main.go:141] libmachine: (addons-046133) DBG | unable to find current IP address of domain addons-046133 in network mk-addons-046133
	I0211 02:02:08.270223   20276 main.go:141] libmachine: (addons-046133) DBG | I0211 02:02:08.270185   20298 retry.go:31] will retry after 748.484119ms: waiting for domain to come up
	I0211 02:02:09.020741   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:09.021203   20276 main.go:141] libmachine: (addons-046133) DBG | unable to find current IP address of domain addons-046133 in network mk-addons-046133
	I0211 02:02:09.021245   20276 main.go:141] libmachine: (addons-046133) DBG | I0211 02:02:09.021164   20298 retry.go:31] will retry after 1.073851456s: waiting for domain to come up
	I0211 02:02:10.096289   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:10.096679   20276 main.go:141] libmachine: (addons-046133) DBG | unable to find current IP address of domain addons-046133 in network mk-addons-046133
	I0211 02:02:10.096701   20276 main.go:141] libmachine: (addons-046133) DBG | I0211 02:02:10.096663   20298 retry.go:31] will retry after 928.251051ms: waiting for domain to come up
	I0211 02:02:11.026612   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:11.026997   20276 main.go:141] libmachine: (addons-046133) DBG | unable to find current IP address of domain addons-046133 in network mk-addons-046133
	I0211 02:02:11.027025   20276 main.go:141] libmachine: (addons-046133) DBG | I0211 02:02:11.026956   20298 retry.go:31] will retry after 1.676251816s: waiting for domain to come up
	I0211 02:02:12.705868   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:12.706244   20276 main.go:141] libmachine: (addons-046133) DBG | unable to find current IP address of domain addons-046133 in network mk-addons-046133
	I0211 02:02:12.706291   20276 main.go:141] libmachine: (addons-046133) DBG | I0211 02:02:12.706239   20298 retry.go:31] will retry after 2.255202242s: waiting for domain to come up
	I0211 02:02:14.963237   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:14.963633   20276 main.go:141] libmachine: (addons-046133) DBG | unable to find current IP address of domain addons-046133 in network mk-addons-046133
	I0211 02:02:14.963659   20276 main.go:141] libmachine: (addons-046133) DBG | I0211 02:02:14.963612   20298 retry.go:31] will retry after 1.897379875s: waiting for domain to come up
	I0211 02:02:16.863558   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:16.863926   20276 main.go:141] libmachine: (addons-046133) DBG | unable to find current IP address of domain addons-046133 in network mk-addons-046133
	I0211 02:02:16.863950   20276 main.go:141] libmachine: (addons-046133) DBG | I0211 02:02:16.863904   20298 retry.go:31] will retry after 3.287580979s: waiting for domain to come up
	I0211 02:02:20.152528   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:20.152847   20276 main.go:141] libmachine: (addons-046133) DBG | unable to find current IP address of domain addons-046133 in network mk-addons-046133
	I0211 02:02:20.152891   20276 main.go:141] libmachine: (addons-046133) DBG | I0211 02:02:20.152819   20298 retry.go:31] will retry after 3.030023115s: waiting for domain to come up
	I0211 02:02:23.185868   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:23.186298   20276 main.go:141] libmachine: (addons-046133) DBG | unable to find current IP address of domain addons-046133 in network mk-addons-046133
	I0211 02:02:23.186327   20276 main.go:141] libmachine: (addons-046133) DBG | I0211 02:02:23.186271   20298 retry.go:31] will retry after 3.516257377s: waiting for domain to come up
	I0211 02:02:26.706331   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:26.706726   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has current primary IP address 192.168.39.211 and MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:26.706749   20276 main.go:141] libmachine: (addons-046133) found domain IP: 192.168.39.211
	I0211 02:02:26.706773   20276 main.go:141] libmachine: (addons-046133) reserving static IP address...
	I0211 02:02:26.707157   20276 main.go:141] libmachine: (addons-046133) DBG | unable to find host DHCP lease matching {name: "addons-046133", mac: "52:54:00:c7:5c:34", ip: "192.168.39.211"} in network mk-addons-046133
	I0211 02:02:26.775619   20276 main.go:141] libmachine: (addons-046133) reserved static IP address 192.168.39.211 for domain addons-046133
	I0211 02:02:26.775653   20276 main.go:141] libmachine: (addons-046133) DBG | Getting to WaitForSSH function...
	I0211 02:02:26.775670   20276 main.go:141] libmachine: (addons-046133) waiting for SSH...
	I0211 02:02:26.778292   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:26.778640   20276 main.go:141] libmachine: (addons-046133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:5c:34", ip: ""} in network mk-addons-046133: {Iface:virbr1 ExpiryTime:2025-02-11 03:02:19 +0000 UTC Type:0 Mac:52:54:00:c7:5c:34 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c7:5c:34}
	I0211 02:02:26.778668   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined IP address 192.168.39.211 and MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:26.778849   20276 main.go:141] libmachine: (addons-046133) DBG | Using SSH client type: external
	I0211 02:02:26.778889   20276 main.go:141] libmachine: (addons-046133) DBG | Using SSH private key: /home/jenkins/minikube-integration/20400-12456/.minikube/machines/addons-046133/id_rsa (-rw-------)
	I0211 02:02:26.778923   20276 main.go:141] libmachine: (addons-046133) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.211 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20400-12456/.minikube/machines/addons-046133/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0211 02:02:26.778937   20276 main.go:141] libmachine: (addons-046133) DBG | About to run SSH command:
	I0211 02:02:26.778950   20276 main.go:141] libmachine: (addons-046133) DBG | exit 0
	I0211 02:02:26.914806   20276 main.go:141] libmachine: (addons-046133) DBG | SSH cmd err, output: <nil>: 
	I0211 02:02:26.915059   20276 main.go:141] libmachine: (addons-046133) KVM machine creation complete
	I0211 02:02:26.915378   20276 main.go:141] libmachine: (addons-046133) Calling .GetConfigRaw
	I0211 02:02:26.915882   20276 main.go:141] libmachine: (addons-046133) Calling .DriverName
	I0211 02:02:26.916048   20276 main.go:141] libmachine: (addons-046133) Calling .DriverName
	I0211 02:02:26.916216   20276 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0211 02:02:26.916231   20276 main.go:141] libmachine: (addons-046133) Calling .GetState
	I0211 02:02:26.917461   20276 main.go:141] libmachine: Detecting operating system of created instance...
	I0211 02:02:26.917476   20276 main.go:141] libmachine: Waiting for SSH to be available...
	I0211 02:02:26.917483   20276 main.go:141] libmachine: Getting to WaitForSSH function...
	I0211 02:02:26.917491   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHHostname
	I0211 02:02:26.919729   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:26.920047   20276 main.go:141] libmachine: (addons-046133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:5c:34", ip: ""} in network mk-addons-046133: {Iface:virbr1 ExpiryTime:2025-02-11 03:02:19 +0000 UTC Type:0 Mac:52:54:00:c7:5c:34 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-046133 Clientid:01:52:54:00:c7:5c:34}
	I0211 02:02:26.920069   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined IP address 192.168.39.211 and MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:26.920232   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHPort
	I0211 02:02:26.920397   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHKeyPath
	I0211 02:02:26.920520   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHKeyPath
	I0211 02:02:26.920649   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHUsername
	I0211 02:02:26.920824   20276 main.go:141] libmachine: Using SSH client type: native
	I0211 02:02:26.921027   20276 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0211 02:02:26.921038   20276 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0211 02:02:27.029817   20276 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0211 02:02:27.029837   20276 main.go:141] libmachine: Detecting the provisioner...
	I0211 02:02:27.029844   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHHostname
	I0211 02:02:27.032262   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:27.032521   20276 main.go:141] libmachine: (addons-046133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:5c:34", ip: ""} in network mk-addons-046133: {Iface:virbr1 ExpiryTime:2025-02-11 03:02:19 +0000 UTC Type:0 Mac:52:54:00:c7:5c:34 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-046133 Clientid:01:52:54:00:c7:5c:34}
	I0211 02:02:27.032550   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined IP address 192.168.39.211 and MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:27.032662   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHPort
	I0211 02:02:27.032845   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHKeyPath
	I0211 02:02:27.032979   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHKeyPath
	I0211 02:02:27.033071   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHUsername
	I0211 02:02:27.033179   20276 main.go:141] libmachine: Using SSH client type: native
	I0211 02:02:27.033361   20276 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0211 02:02:27.033372   20276 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0211 02:02:27.142949   20276 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0211 02:02:27.142991   20276 main.go:141] libmachine: found compatible host: buildroot
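
Provisioner detection comes down to reading /etc/os-release and matching the NAME/ID fields; the Buildroot values above are what select the buildroot provisioner. A rough sketch of that key=value parsing (the function itself is hypothetical):

	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	// parseOSRelease turns /etc/os-release style KEY=value lines into a map,
	// stripping optional surrounding quotes from the values.
	func parseOSRelease(contents string) map[string]string {
		out := map[string]string{}
		sc := bufio.NewScanner(strings.NewReader(contents))
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if line == "" || strings.HasPrefix(line, "#") {
				continue
			}
			k, v, ok := strings.Cut(line, "=")
			if !ok {
				continue
			}
			out[k] = strings.Trim(v, `"`)
		}
		return out
	}

	func main() {
		sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\n"
		info := parseOSRelease(sample)
		fmt.Println(info["ID"]) // "buildroot" -> pick the buildroot provisioner
	}
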
	I0211 02:02:27.142998   20276 main.go:141] libmachine: Provisioning with buildroot...
	I0211 02:02:27.143005   20276 main.go:141] libmachine: (addons-046133) Calling .GetMachineName
	I0211 02:02:27.143229   20276 buildroot.go:166] provisioning hostname "addons-046133"
	I0211 02:02:27.143259   20276 main.go:141] libmachine: (addons-046133) Calling .GetMachineName
	I0211 02:02:27.143406   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHHostname
	I0211 02:02:27.145728   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:27.146074   20276 main.go:141] libmachine: (addons-046133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:5c:34", ip: ""} in network mk-addons-046133: {Iface:virbr1 ExpiryTime:2025-02-11 03:02:19 +0000 UTC Type:0 Mac:52:54:00:c7:5c:34 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-046133 Clientid:01:52:54:00:c7:5c:34}
	I0211 02:02:27.146102   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined IP address 192.168.39.211 and MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:27.146210   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHPort
	I0211 02:02:27.146395   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHKeyPath
	I0211 02:02:27.146517   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHKeyPath
	I0211 02:02:27.146664   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHUsername
	I0211 02:02:27.146806   20276 main.go:141] libmachine: Using SSH client type: native
	I0211 02:02:27.146992   20276 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0211 02:02:27.147005   20276 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-046133 && echo "addons-046133" | sudo tee /etc/hostname
	I0211 02:02:27.267820   20276 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-046133
	
	I0211 02:02:27.267856   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHHostname
	I0211 02:02:27.270346   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:27.270687   20276 main.go:141] libmachine: (addons-046133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:5c:34", ip: ""} in network mk-addons-046133: {Iface:virbr1 ExpiryTime:2025-02-11 03:02:19 +0000 UTC Type:0 Mac:52:54:00:c7:5c:34 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-046133 Clientid:01:52:54:00:c7:5c:34}
	I0211 02:02:27.270714   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined IP address 192.168.39.211 and MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:27.270848   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHPort
	I0211 02:02:27.271021   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHKeyPath
	I0211 02:02:27.271170   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHKeyPath
	I0211 02:02:27.271263   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHUsername
	I0211 02:02:27.271399   20276 main.go:141] libmachine: Using SSH client type: native
	I0211 02:02:27.271546   20276 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0211 02:02:27.271561   20276 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-046133' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-046133/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-046133' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0211 02:02:27.387193   20276 main.go:141] libmachine: SSH cmd err, output: <nil>: 
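
The hosts edit above is deliberately idempotent: it only rewrites the 127.0.1.1 entry if the new hostname is not already resolvable, and appends one otherwise. The same logic sketched in Go against the file contents (the helper name is an assumption):

	package main

	import (
		"fmt"
		"strings"
	)

	// ensureHostsEntry returns hosts with a "127.0.1.1 <name>" line,
	// replacing an existing 127.0.1.1 mapping or appending one if the
	// hostname is not present at all.
	func ensureHostsEntry(hosts, name string) string {
		lines := strings.Split(hosts, "\n")
		for _, l := range lines {
			if strings.HasSuffix(strings.TrimSpace(l), " "+name) {
				return hosts // hostname already resolvable
			}
		}
		for i, l := range lines {
			if strings.HasPrefix(strings.TrimSpace(l), "127.0.1.1") {
				lines[i] = "127.0.1.1 " + name
				return strings.Join(lines, "\n")
			}
		}
		return hosts + "\n127.0.1.1 " + name
	}

	func main() {
		fmt.Println(ensureHostsEntry("127.0.0.1 localhost", "addons-046133"))
	}
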
	I0211 02:02:27.387232   20276 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20400-12456/.minikube CaCertPath:/home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20400-12456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20400-12456/.minikube}
	I0211 02:02:27.387284   20276 buildroot.go:174] setting up certificates
	I0211 02:02:27.387300   20276 provision.go:84] configureAuth start
	I0211 02:02:27.387319   20276 main.go:141] libmachine: (addons-046133) Calling .GetMachineName
	I0211 02:02:27.387595   20276 main.go:141] libmachine: (addons-046133) Calling .GetIP
	I0211 02:02:27.390206   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:27.390574   20276 main.go:141] libmachine: (addons-046133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:5c:34", ip: ""} in network mk-addons-046133: {Iface:virbr1 ExpiryTime:2025-02-11 03:02:19 +0000 UTC Type:0 Mac:52:54:00:c7:5c:34 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-046133 Clientid:01:52:54:00:c7:5c:34}
	I0211 02:02:27.390595   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined IP address 192.168.39.211 and MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:27.390735   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHHostname
	I0211 02:02:27.392832   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:27.393096   20276 main.go:141] libmachine: (addons-046133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:5c:34", ip: ""} in network mk-addons-046133: {Iface:virbr1 ExpiryTime:2025-02-11 03:02:19 +0000 UTC Type:0 Mac:52:54:00:c7:5c:34 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-046133 Clientid:01:52:54:00:c7:5c:34}
	I0211 02:02:27.393126   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined IP address 192.168.39.211 and MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:27.393254   20276 provision.go:143] copyHostCerts
	I0211 02:02:27.393313   20276 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20400-12456/.minikube/ca.pem (1078 bytes)
	I0211 02:02:27.393418   20276 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20400-12456/.minikube/cert.pem (1123 bytes)
	I0211 02:02:27.393478   20276 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20400-12456/.minikube/key.pem (1679 bytes)
	I0211 02:02:27.393524   20276 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20400-12456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca-key.pem org=jenkins.addons-046133 san=[127.0.0.1 192.168.39.211 addons-046133 localhost minikube]
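
The server certificate is issued for a mixed SAN list: loopback, the guest IP, the node name, localhost and minikube. Building an x509 template from such a list means splitting it into IP and DNS entries, roughly like this (a sketch, not minikube's cert generator):

	package main

	import (
		"crypto/x509"
		"fmt"
		"net"
	)

	// splitSANs sorts subject alternative names into IP addresses and DNS
	// names, which go into separate fields of an x509 certificate template.
	func splitSANs(sans []string) (ips []net.IP, dns []string) {
		for _, s := range sans {
			if ip := net.ParseIP(s); ip != nil {
				ips = append(ips, ip)
			} else {
				dns = append(dns, s)
			}
		}
		return ips, dns
	}

	func main() {
		ips, dns := splitSANs([]string{"127.0.0.1", "192.168.39.211", "addons-046133", "localhost", "minikube"})
		tmpl := x509.Certificate{IPAddresses: ips, DNSNames: dns}
		fmt.Println(tmpl.IPAddresses, tmpl.DNSNames)
	}
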
	I0211 02:02:27.538859   20276 provision.go:177] copyRemoteCerts
	I0211 02:02:27.538980   20276 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0211 02:02:27.539010   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHHostname
	I0211 02:02:27.541490   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:27.541767   20276 main.go:141] libmachine: (addons-046133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:5c:34", ip: ""} in network mk-addons-046133: {Iface:virbr1 ExpiryTime:2025-02-11 03:02:19 +0000 UTC Type:0 Mac:52:54:00:c7:5c:34 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-046133 Clientid:01:52:54:00:c7:5c:34}
	I0211 02:02:27.541794   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined IP address 192.168.39.211 and MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:27.541936   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHPort
	I0211 02:02:27.542100   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHKeyPath
	I0211 02:02:27.542221   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHUsername
	I0211 02:02:27.542378   20276 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/addons-046133/id_rsa Username:docker}
	I0211 02:02:27.628403   20276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0211 02:02:27.650139   20276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0211 02:02:27.672313   20276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0211 02:02:27.694597   20276 provision.go:87] duration metric: took 307.279821ms to configureAuth
	I0211 02:02:27.694628   20276 buildroot.go:189] setting minikube options for container-runtime
	I0211 02:02:27.694807   20276 config.go:182] Loaded profile config "addons-046133": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0211 02:02:27.694891   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHHostname
	I0211 02:02:27.697504   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:27.697831   20276 main.go:141] libmachine: (addons-046133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:5c:34", ip: ""} in network mk-addons-046133: {Iface:virbr1 ExpiryTime:2025-02-11 03:02:19 +0000 UTC Type:0 Mac:52:54:00:c7:5c:34 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-046133 Clientid:01:52:54:00:c7:5c:34}
	I0211 02:02:27.697857   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined IP address 192.168.39.211 and MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:27.698038   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHPort
	I0211 02:02:27.698260   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHKeyPath
	I0211 02:02:27.698450   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHKeyPath
	I0211 02:02:27.698597   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHUsername
	I0211 02:02:27.698781   20276 main.go:141] libmachine: Using SSH client type: native
	I0211 02:02:27.698984   20276 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0211 02:02:27.698999   20276 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0211 02:02:27.909852   20276 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
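
The runtime option above is delivered as a one-line sysconfig drop-in followed by a CRI-O restart. Assembling that remote command from the option string might look like the following sketch (the helper is hypothetical):

	package main

	import "fmt"

	// crioSysconfigCommand renders the shell command that installs
	// /etc/sysconfig/crio.minikube and restarts the crio service.
	func crioSysconfigCommand(opts string) string {
		content := fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='%s'\n", opts)
		return fmt.Sprintf(
			`sudo mkdir -p /etc/sysconfig && printf %%s "%s" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`,
			content)
	}

	func main() {
		fmt.Println(crioSysconfigCommand("--insecure-registry 10.96.0.0/12 "))
	}
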
	I0211 02:02:27.909879   20276 main.go:141] libmachine: Checking connection to Docker...
	I0211 02:02:27.909887   20276 main.go:141] libmachine: (addons-046133) Calling .GetURL
	I0211 02:02:27.911224   20276 main.go:141] libmachine: (addons-046133) DBG | using libvirt version 6000000
	I0211 02:02:27.913216   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:27.913539   20276 main.go:141] libmachine: (addons-046133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:5c:34", ip: ""} in network mk-addons-046133: {Iface:virbr1 ExpiryTime:2025-02-11 03:02:19 +0000 UTC Type:0 Mac:52:54:00:c7:5c:34 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-046133 Clientid:01:52:54:00:c7:5c:34}
	I0211 02:02:27.913568   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined IP address 192.168.39.211 and MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:27.913699   20276 main.go:141] libmachine: Docker is up and running!
	I0211 02:02:27.913713   20276 main.go:141] libmachine: Reticulating splines...
	I0211 02:02:27.913721   20276 client.go:171] duration metric: took 24.09157622s to LocalClient.Create
	I0211 02:02:27.913752   20276 start.go:167] duration metric: took 24.091634369s to libmachine.API.Create "addons-046133"
	I0211 02:02:27.913773   20276 start.go:293] postStartSetup for "addons-046133" (driver="kvm2")
	I0211 02:02:27.913791   20276 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0211 02:02:27.913813   20276 main.go:141] libmachine: (addons-046133) Calling .DriverName
	I0211 02:02:27.914056   20276 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0211 02:02:27.914082   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHHostname
	I0211 02:02:27.916188   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:27.916508   20276 main.go:141] libmachine: (addons-046133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:5c:34", ip: ""} in network mk-addons-046133: {Iface:virbr1 ExpiryTime:2025-02-11 03:02:19 +0000 UTC Type:0 Mac:52:54:00:c7:5c:34 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-046133 Clientid:01:52:54:00:c7:5c:34}
	I0211 02:02:27.916536   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined IP address 192.168.39.211 and MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:27.916678   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHPort
	I0211 02:02:27.916833   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHKeyPath
	I0211 02:02:27.916952   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHUsername
	I0211 02:02:27.917080   20276 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/addons-046133/id_rsa Username:docker}
	I0211 02:02:28.000669   20276 ssh_runner.go:195] Run: cat /etc/os-release
	I0211 02:02:28.004331   20276 info.go:137] Remote host: Buildroot 2023.02.9
	I0211 02:02:28.004357   20276 filesync.go:126] Scanning /home/jenkins/minikube-integration/20400-12456/.minikube/addons for local assets ...
	I0211 02:02:28.004433   20276 filesync.go:126] Scanning /home/jenkins/minikube-integration/20400-12456/.minikube/files for local assets ...
	I0211 02:02:28.004463   20276 start.go:296] duration metric: took 90.679444ms for postStartSetup
	I0211 02:02:28.004501   20276 main.go:141] libmachine: (addons-046133) Calling .GetConfigRaw
	I0211 02:02:28.005129   20276 main.go:141] libmachine: (addons-046133) Calling .GetIP
	I0211 02:02:28.007453   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:28.007799   20276 main.go:141] libmachine: (addons-046133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:5c:34", ip: ""} in network mk-addons-046133: {Iface:virbr1 ExpiryTime:2025-02-11 03:02:19 +0000 UTC Type:0 Mac:52:54:00:c7:5c:34 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-046133 Clientid:01:52:54:00:c7:5c:34}
	I0211 02:02:28.007821   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined IP address 192.168.39.211 and MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:28.008064   20276 profile.go:143] Saving config to /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/config.json ...
	I0211 02:02:28.008230   20276 start.go:128] duration metric: took 24.204680401s to createHost
	I0211 02:02:28.008250   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHHostname
	I0211 02:02:28.010368   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:28.010708   20276 main.go:141] libmachine: (addons-046133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:5c:34", ip: ""} in network mk-addons-046133: {Iface:virbr1 ExpiryTime:2025-02-11 03:02:19 +0000 UTC Type:0 Mac:52:54:00:c7:5c:34 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-046133 Clientid:01:52:54:00:c7:5c:34}
	I0211 02:02:28.010743   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined IP address 192.168.39.211 and MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:28.010861   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHPort
	I0211 02:02:28.011067   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHKeyPath
	I0211 02:02:28.011201   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHKeyPath
	I0211 02:02:28.011355   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHUsername
	I0211 02:02:28.011529   20276 main.go:141] libmachine: Using SSH client type: native
	I0211 02:02:28.011718   20276 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.211 22 <nil> <nil>}
	I0211 02:02:28.011729   20276 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0211 02:02:28.123400   20276 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739239348.097926950
	
	I0211 02:02:28.123430   20276 fix.go:216] guest clock: 1739239348.097926950
	I0211 02:02:28.123442   20276 fix.go:229] Guest: 2025-02-11 02:02:28.09792695 +0000 UTC Remote: 2025-02-11 02:02:28.008241107 +0000 UTC m=+24.306478016 (delta=89.685843ms)
	I0211 02:02:28.123492   20276 fix.go:200] guest clock delta is within tolerance: 89.685843ms
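
The clock check compares the guest's "date +%s.%N" output against the host's wall clock; the ~90ms delta above is inside tolerance, so no resync is needed. Parsing that output and computing the delta could look like this (a sketch; the names and the 2-second tolerance are assumptions):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock converts "date +%s.%N" output (seconds.nanoseconds)
	// into a time.Time.
	func parseGuestClock(out string) (time.Time, error) {
		sec, frac, _ := strings.Cut(strings.TrimSpace(out), ".")
		s, err := strconv.ParseInt(sec, 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		n, err := strconv.ParseInt(frac, 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		return time.Unix(s, n), nil
	}

	func main() {
		guest, err := parseGuestClock("1739239348.097926950")
		if err != nil {
			panic(err)
		}
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		fmt.Printf("guest clock delta: %v (within 2s tolerance: %v)\n", delta, delta < 2*time.Second)
	}
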
	I0211 02:02:28.123501   20276 start.go:83] releasing machines lock for "addons-046133", held for 24.320026016s
	I0211 02:02:28.123543   20276 main.go:141] libmachine: (addons-046133) Calling .DriverName
	I0211 02:02:28.123801   20276 main.go:141] libmachine: (addons-046133) Calling .GetIP
	I0211 02:02:28.126087   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:28.126361   20276 main.go:141] libmachine: (addons-046133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:5c:34", ip: ""} in network mk-addons-046133: {Iface:virbr1 ExpiryTime:2025-02-11 03:02:19 +0000 UTC Type:0 Mac:52:54:00:c7:5c:34 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-046133 Clientid:01:52:54:00:c7:5c:34}
	I0211 02:02:28.126390   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined IP address 192.168.39.211 and MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:28.126491   20276 main.go:141] libmachine: (addons-046133) Calling .DriverName
	I0211 02:02:28.126954   20276 main.go:141] libmachine: (addons-046133) Calling .DriverName
	I0211 02:02:28.127118   20276 main.go:141] libmachine: (addons-046133) Calling .DriverName
	I0211 02:02:28.127191   20276 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0211 02:02:28.127252   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHHostname
	I0211 02:02:28.127316   20276 ssh_runner.go:195] Run: cat /version.json
	I0211 02:02:28.127338   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHHostname
	I0211 02:02:28.129834   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:28.130139   20276 main.go:141] libmachine: (addons-046133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:5c:34", ip: ""} in network mk-addons-046133: {Iface:virbr1 ExpiryTime:2025-02-11 03:02:19 +0000 UTC Type:0 Mac:52:54:00:c7:5c:34 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-046133 Clientid:01:52:54:00:c7:5c:34}
	I0211 02:02:28.130165   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined IP address 192.168.39.211 and MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:28.130186   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:28.130272   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHPort
	I0211 02:02:28.130434   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHKeyPath
	I0211 02:02:28.130572   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHUsername
	I0211 02:02:28.130621   20276 main.go:141] libmachine: (addons-046133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:5c:34", ip: ""} in network mk-addons-046133: {Iface:virbr1 ExpiryTime:2025-02-11 03:02:19 +0000 UTC Type:0 Mac:52:54:00:c7:5c:34 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-046133 Clientid:01:52:54:00:c7:5c:34}
	I0211 02:02:28.130649   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined IP address 192.168.39.211 and MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:28.130687   20276 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/addons-046133/id_rsa Username:docker}
	I0211 02:02:28.130827   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHPort
	I0211 02:02:28.130986   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHKeyPath
	I0211 02:02:28.131135   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHUsername
	I0211 02:02:28.131261   20276 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/addons-046133/id_rsa Username:docker}
	I0211 02:02:28.234594   20276 ssh_runner.go:195] Run: systemctl --version
	I0211 02:02:28.240424   20276 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0211 02:02:28.396306   20276 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0211 02:02:28.402017   20276 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0211 02:02:28.402069   20276 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0211 02:02:28.416633   20276 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0211 02:02:28.416656   20276 start.go:495] detecting cgroup driver to use...
	I0211 02:02:28.416711   20276 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0211 02:02:28.431327   20276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0211 02:02:28.443938   20276 docker.go:217] disabling cri-docker service (if available) ...
	I0211 02:02:28.444005   20276 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0211 02:02:28.456572   20276 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0211 02:02:28.468960   20276 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0211 02:02:28.578196   20276 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0211 02:02:28.726347   20276 docker.go:233] disabling docker service ...
	I0211 02:02:28.726436   20276 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0211 02:02:28.740207   20276 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0211 02:02:28.752268   20276 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0211 02:02:28.858365   20276 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0211 02:02:28.968125   20276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0211 02:02:28.981389   20276 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0211 02:02:28.998531   20276 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0211 02:02:28.998602   20276 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 02:02:29.008086   20276 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0211 02:02:29.008165   20276 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 02:02:29.017391   20276 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 02:02:29.026467   20276 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 02:02:29.035686   20276 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0211 02:02:29.044894   20276 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 02:02:29.054004   20276 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 02:02:29.070054   20276 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
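
The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place, pinning the pause image, forcing the cgroupfs cgroup manager and opening unprivileged port 0. The first two substitutions, sketched as a line-oriented rewrite in Go (illustrative only, not minikube's code path):

	package main

	import (
		"fmt"
		"regexp"
	)

	// rewriteCrioConf applies the substitutions the sed commands perform:
	// pin pause_image and force cgroup_manager to cgroupfs.
	func rewriteCrioConf(conf string) string {
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
		return conf
	}

	func main() {
		in := "pause_image = \"old\"\ncgroup_manager = \"systemd\"\n"
		fmt.Print(rewriteCrioConf(in))
	}
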
	I0211 02:02:29.079629   20276 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0211 02:02:29.088312   20276 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0211 02:02:29.088383   20276 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0211 02:02:29.100621   20276 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0211 02:02:29.109328   20276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0211 02:02:29.212383   20276 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0211 02:02:29.297460   20276 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0211 02:02:29.297558   20276 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0211 02:02:29.301734   20276 start.go:563] Will wait 60s for crictl version
	I0211 02:02:29.301799   20276 ssh_runner.go:195] Run: which crictl
	I0211 02:02:29.304970   20276 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0211 02:02:29.344496   20276 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0211 02:02:29.344636   20276 ssh_runner.go:195] Run: crio --version
	I0211 02:02:29.376405   20276 ssh_runner.go:195] Run: crio --version
	I0211 02:02:29.403477   20276 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0211 02:02:29.404823   20276 main.go:141] libmachine: (addons-046133) Calling .GetIP
	I0211 02:02:29.407381   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:29.407702   20276 main.go:141] libmachine: (addons-046133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:5c:34", ip: ""} in network mk-addons-046133: {Iface:virbr1 ExpiryTime:2025-02-11 03:02:19 +0000 UTC Type:0 Mac:52:54:00:c7:5c:34 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-046133 Clientid:01:52:54:00:c7:5c:34}
	I0211 02:02:29.407723   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined IP address 192.168.39.211 and MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:29.407936   20276 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0211 02:02:29.411577   20276 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0211 02:02:29.422940   20276 kubeadm.go:883] updating cluster {Name:addons-046133 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-046133 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizati
ons:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0211 02:02:29.423076   20276 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0211 02:02:29.423116   20276 ssh_runner.go:195] Run: sudo crictl images --output json
	I0211 02:02:29.452575   20276 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0211 02:02:29.452631   20276 ssh_runner.go:195] Run: which lz4
	I0211 02:02:29.456212   20276 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0211 02:02:29.459767   20276 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0211 02:02:29.459804   20276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0211 02:02:30.608023   20276 crio.go:462] duration metric: took 1.151841222s to copy over tarball
	I0211 02:02:30.608095   20276 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0211 02:02:32.735266   20276 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.127142844s)
	I0211 02:02:32.735305   20276 crio.go:469] duration metric: took 2.127249571s to extract the tarball
	I0211 02:02:32.735316   20276 ssh_runner.go:146] rm: /preloaded.tar.lz4
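
Since no images were preloaded, the ~380 MB preload tarball is copied in, unpacked into /var with lz4-aware tar (keeping security.capability xattrs), and then deleted. A local sketch of that extract-and-clean step (paths and helper name are assumptions):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// extractPreload unpacks an lz4-compressed image tarball into destDir,
	// preserving security.capability xattrs, then removes the tarball.
	func extractPreload(tarball, destDir string) error {
		cmd := exec.Command("sudo", "tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", destDir, "-xf", tarball)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			return fmt.Errorf("extract %s: %w", tarball, err)
		}
		return os.Remove(tarball)
	}

	func main() {
		if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
			fmt.Println(err)
		}
	}
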
	I0211 02:02:32.771625   20276 ssh_runner.go:195] Run: sudo crictl images --output json
	I0211 02:02:32.812284   20276 crio.go:514] all images are preloaded for cri-o runtime.
	I0211 02:02:32.812307   20276 cache_images.go:84] Images are preloaded, skipping loading
	I0211 02:02:32.812315   20276 kubeadm.go:934] updating node { 192.168.39.211 8443 v1.32.1 crio true true} ...
	I0211 02:02:32.812435   20276 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-046133 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.211
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:addons-046133 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
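
Only the ExecStart line of the kubelet drop-in above actually varies per node: it bakes in the Kubernetes version, the node name and the node IP. Rendering it could be as simple as the following sketch (the function is hypothetical):

	package main

	import "fmt"

	// kubeletExecStart renders the per-node ExecStart line of the kubelet
	// systemd drop-in for a given Kubernetes version, node name and IP.
	func kubeletExecStart(version, node, ip string) string {
		return fmt.Sprintf("ExecStart=/var/lib/minikube/binaries/%s/kubelet"+
			" --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf"+
			" --config=/var/lib/kubelet/config.yaml"+
			" --hostname-override=%s"+
			" --kubeconfig=/etc/kubernetes/kubelet.conf"+
			" --node-ip=%s", version, node, ip)
	}

	func main() {
		fmt.Println(kubeletExecStart("v1.32.1", "addons-046133", "192.168.39.211"))
	}
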
	I0211 02:02:32.812499   20276 ssh_runner.go:195] Run: crio config
	I0211 02:02:32.862234   20276 cni.go:84] Creating CNI manager for ""
	I0211 02:02:32.862260   20276 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0211 02:02:32.862269   20276 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0211 02:02:32.862288   20276 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.211 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-046133 NodeName:addons-046133 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.211"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.211 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0211 02:02:32.862410   20276 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.211
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-046133"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.211"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.211"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0211 02:02:32.862467   20276 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0211 02:02:32.880644   20276 binaries.go:44] Found k8s binaries, skipping transfer
	I0211 02:02:32.880714   20276 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0211 02:02:32.893837   20276 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0211 02:02:32.911909   20276 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0211 02:02:32.929053   20276 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I0211 02:02:32.944466   20276 ssh_runner.go:195] Run: grep 192.168.39.211	control-plane.minikube.internal$ /etc/hosts
	I0211 02:02:32.948016   20276 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.211	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0211 02:02:32.958522   20276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0211 02:02:33.077815   20276 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0211 02:02:33.097559   20276 certs.go:68] Setting up /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133 for IP: 192.168.39.211
	I0211 02:02:33.097579   20276 certs.go:194] generating shared ca certs ...
	I0211 02:02:33.097593   20276 certs.go:226] acquiring lock for ca certs: {Name:mk14e70e4f3b98aff6eac535114852cc1d70eb3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 02:02:33.097751   20276 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20400-12456/.minikube/ca.key
	I0211 02:02:33.292840   20276 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20400-12456/.minikube/ca.crt ...
	I0211 02:02:33.292867   20276 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12456/.minikube/ca.crt: {Name:mkca0ea72947aed75fbffb3d0cc4274ac2d656f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 02:02:33.293020   20276 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20400-12456/.minikube/ca.key ...
	I0211 02:02:33.293031   20276 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12456/.minikube/ca.key: {Name:mka0ca263b3be2ce5ab56e85d64a8f0473acbb86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 02:02:33.293120   20276 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20400-12456/.minikube/proxy-client-ca.key
	I0211 02:02:33.399981   20276 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20400-12456/.minikube/proxy-client-ca.crt ...
	I0211 02:02:33.400009   20276 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12456/.minikube/proxy-client-ca.crt: {Name:mkd72cf4fc900efeb4557102905edffd22b9e425 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 02:02:33.400163   20276 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20400-12456/.minikube/proxy-client-ca.key ...
	I0211 02:02:33.400174   20276 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12456/.minikube/proxy-client-ca.key: {Name:mk231dd8479890b6d97613238f0697758475efea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 02:02:33.400256   20276 certs.go:256] generating profile certs ...
	I0211 02:02:33.400305   20276 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/client.key
	I0211 02:02:33.400329   20276 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/client.crt with IP's: []
	I0211 02:02:33.538022   20276 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/client.crt ...
	I0211 02:02:33.538051   20276 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/client.crt: {Name:mk4d4693fcb29aa51886be85f7016f01f4f59136 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 02:02:33.538207   20276 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/client.key ...
	I0211 02:02:33.538218   20276 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/client.key: {Name:mkb44474e2b655105a83e1d2e0b20ab745d3f355 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 02:02:33.538287   20276 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/apiserver.key.6b96eefa
	I0211 02:02:33.538304   20276 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/apiserver.crt.6b96eefa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.211]
	I0211 02:02:33.710194   20276 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/apiserver.crt.6b96eefa ...
	I0211 02:02:33.710220   20276 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/apiserver.crt.6b96eefa: {Name:mk5b449a6adbb1c07fd555287d4a632ded61527c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 02:02:33.710368   20276 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/apiserver.key.6b96eefa ...
	I0211 02:02:33.710381   20276 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/apiserver.key.6b96eefa: {Name:mk11797712782a194c9f7dc01c44fff3b1a3f514 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 02:02:33.710442   20276 certs.go:381] copying /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/apiserver.crt.6b96eefa -> /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/apiserver.crt
	I0211 02:02:33.710520   20276 certs.go:385] copying /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/apiserver.key.6b96eefa -> /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/apiserver.key
	I0211 02:02:33.710567   20276 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/proxy-client.key
	I0211 02:02:33.710585   20276 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/proxy-client.crt with IP's: []
	I0211 02:02:33.874511   20276 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/proxy-client.crt ...
	I0211 02:02:33.874539   20276 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/proxy-client.crt: {Name:mk8f23ecad3a78482dfc183d57b48525203446ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 02:02:33.874689   20276 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/proxy-client.key ...
	I0211 02:02:33.874700   20276 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/proxy-client.key: {Name:mk3d32cfee895cc8249f6875ff770aac46cbb5df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 02:02:33.874856   20276 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca-key.pem (1675 bytes)
	I0211 02:02:33.874900   20276 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca.pem (1078 bytes)
	I0211 02:02:33.874923   20276 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/cert.pem (1123 bytes)
	I0211 02:02:33.874945   20276 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/key.pem (1679 bytes)
	I0211 02:02:33.875461   20276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0211 02:02:33.908941   20276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0211 02:02:33.945846   20276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0211 02:02:33.968465   20276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0211 02:02:33.990808   20276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0211 02:02:34.012658   20276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0211 02:02:34.035174   20276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0211 02:02:34.057703   20276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0211 02:02:34.080814   20276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0211 02:02:34.103207   20276 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0211 02:02:34.118546   20276 ssh_runner.go:195] Run: openssl version
	I0211 02:02:34.124047   20276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0211 02:02:34.133747   20276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0211 02:02:34.137775   20276 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 11 02:02 /usr/share/ca-certificates/minikubeCA.pem
	I0211 02:02:34.137816   20276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0211 02:02:34.143027   20276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
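
The steps above install the minikube CA into the guest's trust store: link the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, and point /etc/ssl/certs/<hash>.0 at it. A sketch of that last step, shelling out to openssl for the hash (paths are assumptions):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCAByHash computes the OpenSSL subject hash of a CA certificate and
	// creates the /etc/ssl/certs/<hash>.0 symlink that TLS libraries look up.
	func linkCAByHash(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // replace a stale link if present
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkCAByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Println(err)
		}
	}
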
	I0211 02:02:34.152566   20276 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0211 02:02:34.156223   20276 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0211 02:02:34.156289   20276 kubeadm.go:392] StartCluster: {Name:addons-046133 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:addons-046133 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations
:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0211 02:02:34.156380   20276 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0211 02:02:34.156437   20276 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0211 02:02:34.189653   20276 cri.go:89] found id: ""
	I0211 02:02:34.189728   20276 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0211 02:02:34.198721   20276 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0211 02:02:34.207587   20276 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0211 02:02:34.216027   20276 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0211 02:02:34.216047   20276 kubeadm.go:157] found existing configuration files:
	
	I0211 02:02:34.216085   20276 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0211 02:02:34.224052   20276 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0211 02:02:34.224102   20276 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0211 02:02:34.232285   20276 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0211 02:02:34.240349   20276 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0211 02:02:34.240409   20276 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0211 02:02:34.249423   20276 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0211 02:02:34.257345   20276 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0211 02:02:34.257409   20276 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0211 02:02:34.265737   20276 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0211 02:02:34.273866   20276 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0211 02:02:34.273923   20276 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0211 02:02:34.282551   20276 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
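
The init invocation above combines the rendered kubeadm config with a fixed list of preflight checks to skip (pre-existing manifests and data directories, port 10250, swap, and the CPU and memory minimums). Building that command line might look like this sketch (names are assumptions; only part of the ignore list is shown):

	package main

	import (
		"fmt"
		"strings"
	)

	// kubeadmInitCommand builds the kubeadm init invocation with the
	// preflight errors to ignore on a fresh VM.
	func kubeadmInitCommand(version, config string, ignored []string) string {
		return fmt.Sprintf(
			`sudo env PATH="/var/lib/minikube/binaries/%s:$PATH" kubeadm init --config %s --ignore-preflight-errors=%s`,
			version, config, strings.Join(ignored, ","))
	}

	func main() {
		ignored := []string{"DirAvailable--etc-kubernetes-manifests", "Port-10250", "Swap", "NumCPU", "Mem"}
		fmt.Println(kubeadmInitCommand("v1.32.1", "/var/tmp/minikube/kubeadm.yaml", ignored))
	}
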
	I0211 02:02:34.430243   20276 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0211 02:02:44.477752   20276 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0211 02:02:44.477827   20276 kubeadm.go:310] [preflight] Running pre-flight checks
	I0211 02:02:44.477922   20276 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0211 02:02:44.478047   20276 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0211 02:02:44.478192   20276 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0211 02:02:44.478262   20276 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0211 02:02:44.479649   20276 out.go:235]   - Generating certificates and keys ...
	I0211 02:02:44.479727   20276 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0211 02:02:44.479798   20276 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0211 02:02:44.479894   20276 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0211 02:02:44.479985   20276 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0211 02:02:44.480083   20276 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0211 02:02:44.480163   20276 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0211 02:02:44.480240   20276 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0211 02:02:44.480404   20276 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-046133 localhost] and IPs [192.168.39.211 127.0.0.1 ::1]
	I0211 02:02:44.480456   20276 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0211 02:02:44.480578   20276 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-046133 localhost] and IPs [192.168.39.211 127.0.0.1 ::1]
	I0211 02:02:44.480671   20276 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0211 02:02:44.480757   20276 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0211 02:02:44.480819   20276 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0211 02:02:44.480894   20276 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0211 02:02:44.480969   20276 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0211 02:02:44.481031   20276 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0211 02:02:44.481103   20276 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0211 02:02:44.481167   20276 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0211 02:02:44.481249   20276 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0211 02:02:44.481354   20276 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0211 02:02:44.481444   20276 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0211 02:02:44.482900   20276 out.go:235]   - Booting up control plane ...
	I0211 02:02:44.483007   20276 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0211 02:02:44.483111   20276 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0211 02:02:44.483195   20276 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0211 02:02:44.483348   20276 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0211 02:02:44.483472   20276 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0211 02:02:44.483509   20276 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0211 02:02:44.483620   20276 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0211 02:02:44.483727   20276 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0211 02:02:44.483811   20276 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.020617ms
	I0211 02:02:44.483908   20276 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0211 02:02:44.483994   20276 kubeadm.go:310] [api-check] The API server is healthy after 5.002013019s
	I0211 02:02:44.484163   20276 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0211 02:02:44.484331   20276 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0211 02:02:44.484415   20276 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0211 02:02:44.484607   20276 kubeadm.go:310] [mark-control-plane] Marking the node addons-046133 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0211 02:02:44.484687   20276 kubeadm.go:310] [bootstrap-token] Using token: im3ens.x5puvadslswl5mh3
	I0211 02:02:44.486862   20276 out.go:235]   - Configuring RBAC rules ...
	I0211 02:02:44.486982   20276 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0211 02:02:44.487076   20276 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0211 02:02:44.487271   20276 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0211 02:02:44.487441   20276 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0211 02:02:44.487586   20276 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0211 02:02:44.487724   20276 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0211 02:02:44.487896   20276 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0211 02:02:44.487947   20276 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0211 02:02:44.488009   20276 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0211 02:02:44.488019   20276 kubeadm.go:310] 
	I0211 02:02:44.488099   20276 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0211 02:02:44.488111   20276 kubeadm.go:310] 
	I0211 02:02:44.488241   20276 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0211 02:02:44.488254   20276 kubeadm.go:310] 
	I0211 02:02:44.488308   20276 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0211 02:02:44.488386   20276 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0211 02:02:44.488464   20276 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0211 02:02:44.488479   20276 kubeadm.go:310] 
	I0211 02:02:44.488563   20276 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0211 02:02:44.488572   20276 kubeadm.go:310] 
	I0211 02:02:44.488635   20276 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0211 02:02:44.488644   20276 kubeadm.go:310] 
	I0211 02:02:44.488692   20276 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0211 02:02:44.488756   20276 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0211 02:02:44.488836   20276 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0211 02:02:44.488846   20276 kubeadm.go:310] 
	I0211 02:02:44.488955   20276 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0211 02:02:44.489063   20276 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0211 02:02:44.489073   20276 kubeadm.go:310] 
	I0211 02:02:44.489193   20276 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token im3ens.x5puvadslswl5mh3 \
	I0211 02:02:44.489345   20276 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2e161f5cde2e462cb9fb80847b9945297701bdc8e7251bde04f5738d45684f8b \
	I0211 02:02:44.489376   20276 kubeadm.go:310] 	--control-plane 
	I0211 02:02:44.489390   20276 kubeadm.go:310] 
	I0211 02:02:44.489508   20276 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0211 02:02:44.489516   20276 kubeadm.go:310] 
	I0211 02:02:44.489636   20276 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token im3ens.x5puvadslswl5mh3 \
	I0211 02:02:44.489795   20276 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2e161f5cde2e462cb9fb80847b9945297701bdc8e7251bde04f5738d45684f8b 
	I0211 02:02:44.489808   20276 cni.go:84] Creating CNI manager for ""
	I0211 02:02:44.489820   20276 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0211 02:02:44.491975   20276 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0211 02:02:44.493237   20276 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0211 02:02:44.503564   20276 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0211 02:02:44.524889   20276 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0211 02:02:44.524958   20276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 02:02:44.524965   20276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-046133 minikube.k8s.io/updated_at=2025_02_11T02_02_44_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=8e91f70b9b442caa4bec80b031add390ac34d321 minikube.k8s.io/name=addons-046133 minikube.k8s.io/primary=true
	I0211 02:02:44.663084   20276 ops.go:34] apiserver oom_adj: -16
	I0211 02:02:44.663206   20276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 02:02:45.163548   20276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 02:02:45.663278   20276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 02:02:46.163528   20276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 02:02:46.664260   20276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 02:02:47.163721   20276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 02:02:47.664013   20276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 02:02:48.164077   20276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 02:02:48.794368   20276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 02:02:49.163314   20276 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 02:02:49.277746   20276 kubeadm.go:1113] duration metric: took 4.752848781s to wait for elevateKubeSystemPrivileges
	I0211 02:02:49.277780   20276 kubeadm.go:394] duration metric: took 15.121496902s to StartCluster
	I0211 02:02:49.277797   20276 settings.go:142] acquiring lock: {Name:mkf2645a714cc5873c434b18e1494d4128c48052 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 02:02:49.277908   20276 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20400-12456/kubeconfig
	I0211 02:02:49.278249   20276 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12456/kubeconfig: {Name:mkd961d61f748b29ba3bb0ad55f8216d88f98444 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 02:02:49.278448   20276 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0211 02:02:49.278458   20276 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.211 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0211 02:02:49.278524   20276 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0211 02:02:49.278631   20276 config.go:182] Loaded profile config "addons-046133": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0211 02:02:49.278660   20276 addons.go:69] Setting yakd=true in profile "addons-046133"
	I0211 02:02:49.278679   20276 addons.go:69] Setting gcp-auth=true in profile "addons-046133"
	I0211 02:02:49.278681   20276 addons.go:69] Setting default-storageclass=true in profile "addons-046133"
	I0211 02:02:49.278692   20276 addons.go:238] Setting addon yakd=true in "addons-046133"
	I0211 02:02:49.278686   20276 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-046133"
	I0211 02:02:49.278699   20276 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-046133"
	I0211 02:02:49.278714   20276 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-046133"
	I0211 02:02:49.278728   20276 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-046133"
	I0211 02:02:49.278742   20276 addons.go:69] Setting volcano=true in profile "addons-046133"
	I0211 02:02:49.278752   20276 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-046133"
	I0211 02:02:49.278761   20276 addons.go:69] Setting volumesnapshots=true in profile "addons-046133"
	I0211 02:02:49.278766   20276 addons.go:69] Setting ingress-dns=true in profile "addons-046133"
	I0211 02:02:49.278774   20276 addons.go:238] Setting addon volumesnapshots=true in "addons-046133"
	I0211 02:02:49.278786   20276 addons.go:238] Setting addon ingress-dns=true in "addons-046133"
	I0211 02:02:49.278792   20276 host.go:66] Checking if "addons-046133" exists ...
	I0211 02:02:49.278798   20276 host.go:66] Checking if "addons-046133" exists ...
	I0211 02:02:49.278807   20276 addons.go:69] Setting metrics-server=true in profile "addons-046133"
	I0211 02:02:49.278819   20276 host.go:66] Checking if "addons-046133" exists ...
	I0211 02:02:49.278824   20276 addons.go:238] Setting addon metrics-server=true in "addons-046133"
	I0211 02:02:49.278848   20276 host.go:66] Checking if "addons-046133" exists ...
	I0211 02:02:49.279019   20276 addons.go:69] Setting inspektor-gadget=true in profile "addons-046133"
	I0211 02:02:49.279062   20276 addons.go:238] Setting addon inspektor-gadget=true in "addons-046133"
	I0211 02:02:49.279092   20276 host.go:66] Checking if "addons-046133" exists ...
	I0211 02:02:49.279126   20276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:02:49.279150   20276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:02:49.279190   20276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:02:49.278661   20276 addons.go:69] Setting ingress=true in profile "addons-046133"
	I0211 02:02:49.279201   20276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:02:49.279212   20276 addons.go:238] Setting addon ingress=true in "addons-046133"
	I0211 02:02:49.279217   20276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:02:49.279221   20276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:02:49.279236   20276 host.go:66] Checking if "addons-046133" exists ...
	I0211 02:02:49.279297   20276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:02:49.279324   20276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:02:49.279373   20276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:02:49.279391   20276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:02:49.279606   20276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:02:49.279639   20276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:02:49.279645   20276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:02:49.278731   20276 host.go:66] Checking if "addons-046133" exists ...
	I0211 02:02:49.279671   20276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:02:49.279842   20276 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-046133"
	I0211 02:02:49.279873   20276 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-046133"
	I0211 02:02:49.279898   20276 addons.go:69] Setting cloud-spanner=true in profile "addons-046133"
	I0211 02:02:49.279923   20276 addons.go:238] Setting addon cloud-spanner=true in "addons-046133"
	I0211 02:02:49.279938   20276 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-046133"
	I0211 02:02:49.279959   20276 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-046133"
	I0211 02:02:49.279969   20276 host.go:66] Checking if "addons-046133" exists ...
	I0211 02:02:49.279980   20276 host.go:66] Checking if "addons-046133" exists ...
	I0211 02:02:49.280106   20276 addons.go:69] Setting storage-provisioner=true in profile "addons-046133"
	I0211 02:02:49.280126   20276 addons.go:238] Setting addon storage-provisioner=true in "addons-046133"
	I0211 02:02:49.280149   20276 host.go:66] Checking if "addons-046133" exists ...
	I0211 02:02:49.278750   20276 addons.go:69] Setting registry=true in profile "addons-046133"
	I0211 02:02:49.280336   20276 addons.go:238] Setting addon registry=true in "addons-046133"
	I0211 02:02:49.280361   20276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:02:49.280377   20276 host.go:66] Checking if "addons-046133" exists ...
	I0211 02:02:49.280383   20276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:02:49.279195   20276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:02:49.280421   20276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:02:49.280453   20276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:02:49.280479   20276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:02:49.280497   20276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:02:49.278753   20276 addons.go:238] Setting addon volcano=true in "addons-046133"
	I0211 02:02:49.280385   20276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:02:49.278701   20276 mustload.go:65] Loading cluster: addons-046133
	I0211 02:02:49.280759   20276 host.go:66] Checking if "addons-046133" exists ...
	I0211 02:02:49.279924   20276 host.go:66] Checking if "addons-046133" exists ...
	I0211 02:02:49.280919   20276 config.go:182] Loaded profile config "addons-046133": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0211 02:02:49.281121   20276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:02:49.281140   20276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:02:49.281153   20276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:02:49.281172   20276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:02:49.281256   20276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:02:49.281341   20276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:02:49.281605   20276 out.go:177] * Verifying Kubernetes components...
	I0211 02:02:49.283068   20276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0211 02:02:49.297178   20276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37041
	I0211 02:02:49.300656   20276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43095
	I0211 02:02:49.300684   20276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44719
	I0211 02:02:49.311609   20276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:02:49.311710   20276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:02:49.311825   20276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:02:49.311856   20276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:02:49.312284   20276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42103
	I0211 02:02:49.312325   20276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36377
	I0211 02:02:49.312436   20276 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:02:49.312829   20276 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:02:49.312849   20276 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:02:49.312927   20276 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:02:49.312949   20276 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:02:49.313102   20276 main.go:141] libmachine: Using API Version  1
	I0211 02:02:49.313118   20276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:02:49.313342   20276 main.go:141] libmachine: Using API Version  1
	I0211 02:02:49.313358   20276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:02:49.313935   20276 main.go:141] libmachine: Using API Version  1
	I0211 02:02:49.313957   20276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:02:49.313996   20276 main.go:141] libmachine: Using API Version  1
	I0211 02:02:49.314010   20276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:02:49.314021   20276 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:02:49.314059   20276 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:02:49.314587   20276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:02:49.314624   20276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:02:49.315205   20276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:02:49.315242   20276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:02:49.324223   20276 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:02:49.324321   20276 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:02:49.327035   20276 main.go:141] libmachine: (addons-046133) Calling .GetState
	I0211 02:02:49.327143   20276 main.go:141] libmachine: Using API Version  1
	I0211 02:02:49.327168   20276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:02:49.327432   20276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:02:49.327467   20276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:02:49.328076   20276 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:02:49.328533   20276 main.go:141] libmachine: (addons-046133) Calling .GetState
	I0211 02:02:49.335001   20276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41231
	I0211 02:02:49.342988   20276 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:02:49.344671   20276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45481
	I0211 02:02:49.344790   20276 main.go:141] libmachine: Using API Version  1
	I0211 02:02:49.344808   20276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:02:49.345240   20276 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:02:49.345249   20276 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:02:49.345670   20276 main.go:141] libmachine: Using API Version  1
	I0211 02:02:49.345686   20276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:02:49.345827   20276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:02:49.345868   20276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:02:49.346105   20276 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:02:49.346260   20276 main.go:141] libmachine: (addons-046133) Calling .GetState
	I0211 02:02:49.348123   20276 host.go:66] Checking if "addons-046133" exists ...
	I0211 02:02:49.348464   20276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:02:49.348484   20276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:02:49.349999   20276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39397
	I0211 02:02:49.350353   20276 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:02:49.350812   20276 main.go:141] libmachine: Using API Version  1
	I0211 02:02:49.350837   20276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:02:49.351162   20276 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:02:49.351415   20276 main.go:141] libmachine: (addons-046133) Calling .GetState
	I0211 02:02:49.353032   20276 main.go:141] libmachine: (addons-046133) Calling .DriverName
	I0211 02:02:49.354965   20276 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0211 02:02:49.355602   20276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38803
	I0211 02:02:49.355768   20276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38805
	I0211 02:02:49.356340   20276 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:02:49.356842   20276 main.go:141] libmachine: Using API Version  1
	I0211 02:02:49.356859   20276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:02:49.357057   20276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39939
	I0211 02:02:49.357163   20276 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:02:49.357396   20276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43845
	I0211 02:02:49.357524   20276 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0211 02:02:49.357781   20276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:02:49.357818   20276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:02:49.358355   20276 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:02:49.358603   20276 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:02:49.359042   20276 main.go:141] libmachine: Using API Version  1
	I0211 02:02:49.359060   20276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:02:49.359367   20276 main.go:141] libmachine: Using API Version  1
	I0211 02:02:49.359395   20276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:02:49.359454   20276 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:02:49.359696   20276 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:02:49.360329   20276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:02:49.360358   20276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:02:49.360726   20276 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0211 02:02:49.360833   20276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:02:49.360874   20276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:02:49.361195   20276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36685
	I0211 02:02:49.361678   20276 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:02:49.362336   20276 main.go:141] libmachine: Using API Version  1
	I0211 02:02:49.362355   20276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:02:49.362918   20276 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:02:49.363202   20276 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0211 02:02:49.363768   20276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:02:49.363812   20276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:02:49.364761   20276 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:02:49.365321   20276 main.go:141] libmachine: Using API Version  1
	I0211 02:02:49.365338   20276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:02:49.365483   20276 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0211 02:02:49.365844   20276 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:02:49.366423   20276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:02:49.366446   20276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:02:49.368315   20276 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0211 02:02:49.369542   20276 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0211 02:02:49.370038   20276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33059
	I0211 02:02:49.370374   20276 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:02:49.370834   20276 main.go:141] libmachine: Using API Version  1
	I0211 02:02:49.370850   20276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:02:49.371356   20276 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:02:49.371884   20276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:02:49.371924   20276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:02:49.372231   20276 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0211 02:02:49.373258   20276 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0211 02:02:49.373283   20276 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0211 02:02:49.373301   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHHostname
	I0211 02:02:49.373622   20276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34739
	I0211 02:02:49.374006   20276 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:02:49.374483   20276 main.go:141] libmachine: Using API Version  1
	I0211 02:02:49.374505   20276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:02:49.374791   20276 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:02:49.374984   20276 main.go:141] libmachine: (addons-046133) Calling .DriverName
	I0211 02:02:49.377124   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:49.377990   20276 main.go:141] libmachine: (addons-046133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:5c:34", ip: ""} in network mk-addons-046133: {Iface:virbr1 ExpiryTime:2025-02-11 03:02:19 +0000 UTC Type:0 Mac:52:54:00:c7:5c:34 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-046133 Clientid:01:52:54:00:c7:5c:34}
	I0211 02:02:49.378011   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined IP address 192.168.39.211 and MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:49.378210   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHPort
	I0211 02:02:49.378356   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHKeyPath
	I0211 02:02:49.378812   20276 addons.go:238] Setting addon default-storageclass=true in "addons-046133"
	I0211 02:02:49.378845   20276 host.go:66] Checking if "addons-046133" exists ...
	I0211 02:02:49.379061   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHUsername
	I0211 02:02:49.379209   20276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:02:49.379237   20276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:02:49.379510   20276 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/addons-046133/id_rsa Username:docker}
	I0211 02:02:49.379578   20276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46141
	I0211 02:02:49.379734   20276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44665
	I0211 02:02:49.379975   20276 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:02:49.380107   20276 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:02:49.380431   20276 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-046133"
	I0211 02:02:49.380474   20276 host.go:66] Checking if "addons-046133" exists ...
	I0211 02:02:49.380604   20276 main.go:141] libmachine: Using API Version  1
	I0211 02:02:49.380618   20276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:02:49.380677   20276 main.go:141] libmachine: Using API Version  1
	I0211 02:02:49.380691   20276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:02:49.380821   20276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:02:49.380838   20276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:02:49.382691   20276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35787
	I0211 02:02:49.382725   20276 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:02:49.382773   20276 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:02:49.383239   20276 main.go:141] libmachine: (addons-046133) Calling .GetState
	I0211 02:02:49.387337   20276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43183
	I0211 02:02:49.388335   20276 main.go:141] libmachine: (addons-046133) Calling .DriverName
	I0211 02:02:49.388852   20276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33863
	I0211 02:02:49.390447   20276 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.37.0
	I0211 02:02:49.391647   20276 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0211 02:02:49.391667   20276 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0211 02:02:49.391686   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHHostname
	I0211 02:02:49.391857   20276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45957
	I0211 02:02:49.393755   20276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38325
	I0211 02:02:49.400407   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHPort
	I0211 02:02:49.400478   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:49.400493   20276 main.go:141] libmachine: (addons-046133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:5c:34", ip: ""} in network mk-addons-046133: {Iface:virbr1 ExpiryTime:2025-02-11 03:02:19 +0000 UTC Type:0 Mac:52:54:00:c7:5c:34 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-046133 Clientid:01:52:54:00:c7:5c:34}
	I0211 02:02:49.400507   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined IP address 192.168.39.211 and MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:49.400629   20276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38859
	I0211 02:02:49.403041   20276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46427
	I0211 02:02:49.403101   20276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40205
	I0211 02:02:49.403126   20276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34225
	I0211 02:02:49.403132   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHKeyPath
	I0211 02:02:49.403313   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHUsername
	I0211 02:02:49.403514   20276 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:02:49.403605   20276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:02:49.403620   20276 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:02:49.403643   20276 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:02:49.403647   20276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:02:49.403680   20276 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/addons-046133/id_rsa Username:docker}
	I0211 02:02:49.404072   20276 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:02:49.404171   20276 main.go:141] libmachine: Using API Version  1
	I0211 02:02:49.404188   20276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:02:49.404231   20276 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:02:49.404263   20276 main.go:141] libmachine: Using API Version  1
	I0211 02:02:49.404277   20276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:02:49.404291   20276 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:02:49.404253   20276 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:02:49.404334   20276 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:02:49.404689   20276 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:02:49.404778   20276 main.go:141] libmachine: Using API Version  1
	I0211 02:02:49.404793   20276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:02:49.404834   20276 main.go:141] libmachine: Using API Version  1
	I0211 02:02:49.404846   20276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:02:49.404856   20276 main.go:141] libmachine: (addons-046133) Calling .GetState
	I0211 02:02:49.404899   20276 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:02:49.404919   20276 main.go:141] libmachine: Using API Version  1
	I0211 02:02:49.404936   20276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:02:49.405445   20276 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:02:49.405471   20276 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:02:49.405576   20276 main.go:141] libmachine: Using API Version  1
	I0211 02:02:49.405588   20276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:02:49.405617   20276 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:02:49.405646   20276 main.go:141] libmachine: (addons-046133) Calling .GetState
	I0211 02:02:49.405647   20276 main.go:141] libmachine: Using API Version  1
	I0211 02:02:49.405658   20276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:02:49.405750   20276 main.go:141] libmachine: Using API Version  1
	I0211 02:02:49.405765   20276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:02:49.405960   20276 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:02:49.406196   20276 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:02:49.406249   20276 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:02:49.406514   20276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:02:49.406555   20276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:02:49.406655   20276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:02:49.406686   20276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:02:49.406813   20276 main.go:141] libmachine: Using API Version  1
	I0211 02:02:49.406828   20276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:02:49.406911   20276 main.go:141] libmachine: (addons-046133) Calling .GetState
	I0211 02:02:49.406923   20276 main.go:141] libmachine: (addons-046133) Calling .GetState
	I0211 02:02:49.407336   20276 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:02:49.407348   20276 main.go:141] libmachine: (addons-046133) Calling .DriverName
	I0211 02:02:49.407406   20276 main.go:141] libmachine: (addons-046133) Calling .GetState
	I0211 02:02:49.407539   20276 main.go:141] libmachine: (addons-046133) Calling .GetState
	I0211 02:02:49.407795   20276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40345
	I0211 02:02:49.408269   20276 main.go:141] libmachine: (addons-046133) Calling .DriverName
	I0211 02:02:49.408721   20276 main.go:141] libmachine: Making call to close driver server
	I0211 02:02:49.408735   20276 main.go:141] libmachine: (addons-046133) Calling .Close
	I0211 02:02:49.408714   20276 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:02:49.409872   20276 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I0211 02:02:49.410272   20276 main.go:141] libmachine: (addons-046133) Calling .DriverName
	I0211 02:02:49.410287   20276 main.go:141] libmachine: (addons-046133) Calling .DriverName
	I0211 02:02:49.410327   20276 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:02:49.410335   20276 main.go:141] libmachine: (addons-046133) DBG | Closing plugin on server side
	I0211 02:02:49.410340   20276 main.go:141] libmachine: (addons-046133) Calling .DriverName
	I0211 02:02:49.410349   20276 main.go:141] libmachine: (addons-046133) Calling .DriverName
	I0211 02:02:49.410395   20276 main.go:141] libmachine: Successfully made call to close driver server
	I0211 02:02:49.410405   20276 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 02:02:49.410414   20276 main.go:141] libmachine: Making call to close driver server
	I0211 02:02:49.410422   20276 main.go:141] libmachine: (addons-046133) Calling .Close
	I0211 02:02:49.410637   20276 main.go:141] libmachine: (addons-046133) DBG | Closing plugin on server side
	I0211 02:02:49.410660   20276 main.go:141] libmachine: (addons-046133) Calling .GetState
	I0211 02:02:49.410661   20276 main.go:141] libmachine: Successfully made call to close driver server
	I0211 02:02:49.411026   20276 main.go:141] libmachine: Making call to close connection to plugin binary
	W0211 02:02:49.411089   20276 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0211 02:02:49.411125   20276 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0211 02:02:49.411150   20276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0211 02:02:49.411167   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHHostname
	I0211 02:02:49.411482   20276 main.go:141] libmachine: Using API Version  1
	I0211 02:02:49.411494   20276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:02:49.411806   20276 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:02:49.412030   20276 main.go:141] libmachine: (addons-046133) Calling .GetState
	I0211 02:02:49.412284   20276 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0211 02:02:49.412342   20276 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0211 02:02:49.412360   20276 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0211 02:02:49.412528   20276 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0211 02:02:49.413520   20276 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0211 02:02:49.413534   20276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0211 02:02:49.413556   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHHostname
	I0211 02:02:49.413730   20276 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0211 02:02:49.413745   20276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0211 02:02:49.413771   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHHostname
	I0211 02:02:49.414270   20276 main.go:141] libmachine: (addons-046133) Calling .DriverName
	I0211 02:02:49.414660   20276 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0211 02:02:49.414676   20276 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0211 02:02:49.414692   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHHostname
	I0211 02:02:49.415287   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:49.415884   20276 main.go:141] libmachine: (addons-046133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:5c:34", ip: ""} in network mk-addons-046133: {Iface:virbr1 ExpiryTime:2025-02-11 03:02:19 +0000 UTC Type:0 Mac:52:54:00:c7:5c:34 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-046133 Clientid:01:52:54:00:c7:5c:34}
	I0211 02:02:49.415917   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined IP address 192.168.39.211 and MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:49.415972   20276 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0211 02:02:49.416169   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHPort
	I0211 02:02:49.416358   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHKeyPath
	I0211 02:02:49.416519   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHUsername
	I0211 02:02:49.416656   20276 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/addons-046133/id_rsa Username:docker}
	I0211 02:02:49.417033   20276 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0211 02:02:49.417100   20276 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0211 02:02:49.417119   20276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0211 02:02:49.417134   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHHostname
	I0211 02:02:49.417579   20276 main.go:141] libmachine: (addons-046133) Calling .DriverName
	I0211 02:02:49.417803   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:49.419117   20276 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0211 02:02:49.419131   20276 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I0211 02:02:49.419984   20276 main.go:141] libmachine: (addons-046133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:5c:34", ip: ""} in network mk-addons-046133: {Iface:virbr1 ExpiryTime:2025-02-11 03:02:19 +0000 UTC Type:0 Mac:52:54:00:c7:5c:34 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-046133 Clientid:01:52:54:00:c7:5c:34}
	I0211 02:02:49.420015   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined IP address 192.168.39.211 and MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:49.420146   20276 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0211 02:02:49.420163   20276 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0211 02:02:49.420178   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHHostname
	I0211 02:02:49.420411   20276 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0211 02:02:49.420430   20276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0211 02:02:49.420432   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHPort
	I0211 02:02:49.420446   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHHostname
	I0211 02:02:49.420788   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHKeyPath
	I0211 02:02:49.420794   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:49.421525   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHUsername
	I0211 02:02:49.421591   20276 main.go:141] libmachine: (addons-046133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:5c:34", ip: ""} in network mk-addons-046133: {Iface:virbr1 ExpiryTime:2025-02-11 03:02:19 +0000 UTC Type:0 Mac:52:54:00:c7:5c:34 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-046133 Clientid:01:52:54:00:c7:5c:34}
	I0211 02:02:49.421605   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined IP address 192.168.39.211 and MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:49.421779   20276 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/addons-046133/id_rsa Username:docker}
	I0211 02:02:49.422789   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHPort
	I0211 02:02:49.422985   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHKeyPath
	I0211 02:02:49.423127   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHUsername
	I0211 02:02:49.423219   20276 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/addons-046133/id_rsa Username:docker}
	I0211 02:02:49.423476   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:49.423773   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:49.424998   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHPort
	I0211 02:02:49.425006   20276 main.go:141] libmachine: (addons-046133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:5c:34", ip: ""} in network mk-addons-046133: {Iface:virbr1 ExpiryTime:2025-02-11 03:02:19 +0000 UTC Type:0 Mac:52:54:00:c7:5c:34 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-046133 Clientid:01:52:54:00:c7:5c:34}
	I0211 02:02:49.425025   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined IP address 192.168.39.211 and MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:49.425001   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHPort
	I0211 02:02:49.425029   20276 main.go:141] libmachine: (addons-046133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:5c:34", ip: ""} in network mk-addons-046133: {Iface:virbr1 ExpiryTime:2025-02-11 03:02:19 +0000 UTC Type:0 Mac:52:54:00:c7:5c:34 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-046133 Clientid:01:52:54:00:c7:5c:34}
	I0211 02:02:49.425052   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined IP address 192.168.39.211 and MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:49.425144   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHKeyPath
	I0211 02:02:49.425190   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHKeyPath
	I0211 02:02:49.425345   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHUsername
	I0211 02:02:49.425351   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHUsername
	I0211 02:02:49.425515   20276 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/addons-046133/id_rsa Username:docker}
	I0211 02:02:49.425571   20276 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/addons-046133/id_rsa Username:docker}
	I0211 02:02:49.426438   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:49.427293   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:49.428560   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHPort
	I0211 02:02:49.428630   20276 main.go:141] libmachine: (addons-046133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:5c:34", ip: ""} in network mk-addons-046133: {Iface:virbr1 ExpiryTime:2025-02-11 03:02:19 +0000 UTC Type:0 Mac:52:54:00:c7:5c:34 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-046133 Clientid:01:52:54:00:c7:5c:34}
	I0211 02:02:49.428645   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined IP address 192.168.39.211 and MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:49.428677   20276 main.go:141] libmachine: (addons-046133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:5c:34", ip: ""} in network mk-addons-046133: {Iface:virbr1 ExpiryTime:2025-02-11 03:02:19 +0000 UTC Type:0 Mac:52:54:00:c7:5c:34 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-046133 Clientid:01:52:54:00:c7:5c:34}
	I0211 02:02:49.428692   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined IP address 192.168.39.211 and MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:49.428890   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHKeyPath
	I0211 02:02:49.429082   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHUsername
	I0211 02:02:49.429279   20276 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/addons-046133/id_rsa Username:docker}
	I0211 02:02:49.429693   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHPort
	I0211 02:02:49.429915   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHKeyPath
	I0211 02:02:49.430150   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHUsername
	I0211 02:02:49.430358   20276 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/addons-046133/id_rsa Username:docker}
	I0211 02:02:49.439481   20276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46503
	I0211 02:02:49.439795   20276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46803
	I0211 02:02:49.440028   20276 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:02:49.440114   20276 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:02:49.440699   20276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43467
	I0211 02:02:49.440819   20276 main.go:141] libmachine: Using API Version  1
	I0211 02:02:49.440838   20276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:02:49.440853   20276 main.go:141] libmachine: Using API Version  1
	I0211 02:02:49.440872   20276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:02:49.441144   20276 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:02:49.441257   20276 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:02:49.441297   20276 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:02:49.441688   20276 main.go:141] libmachine: Using API Version  1
	I0211 02:02:49.441706   20276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:02:49.441972   20276 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:02:49.441994   20276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:02:49.442013   20276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:02:49.442034   20276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:02:49.442071   20276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:02:49.442074   20276 main.go:141] libmachine: (addons-046133) Calling .GetState
	I0211 02:02:49.443190   20276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41219
	I0211 02:02:49.443692   20276 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:02:49.444279   20276 main.go:141] libmachine: (addons-046133) Calling .DriverName
	I0211 02:02:49.444349   20276 main.go:141] libmachine: Using API Version  1
	I0211 02:02:49.444364   20276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:02:49.445276   20276 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:02:49.445830   20276 main.go:141] libmachine: (addons-046133) Calling .GetState
	I0211 02:02:49.445957   20276 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.28
	I0211 02:02:49.447003   20276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32833
	I0211 02:02:49.447100   20276 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0211 02:02:49.447114   20276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0211 02:02:49.447136   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHHostname
	I0211 02:02:49.447412   20276 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:02:49.447971   20276 main.go:141] libmachine: Using API Version  1
	I0211 02:02:49.447989   20276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:02:49.448023   20276 main.go:141] libmachine: (addons-046133) Calling .DriverName
	I0211 02:02:49.448429   20276 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:02:49.448644   20276 main.go:141] libmachine: (addons-046133) Calling .GetState
	I0211 02:02:49.449552   20276 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0211 02:02:49.450520   20276 main.go:141] libmachine: (addons-046133) Calling .DriverName
	I0211 02:02:49.450656   20276 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0211 02:02:49.450674   20276 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0211 02:02:49.450703   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHHostname
	I0211 02:02:49.451055   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:49.451491   20276 main.go:141] libmachine: (addons-046133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:5c:34", ip: ""} in network mk-addons-046133: {Iface:virbr1 ExpiryTime:2025-02-11 03:02:19 +0000 UTC Type:0 Mac:52:54:00:c7:5c:34 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-046133 Clientid:01:52:54:00:c7:5c:34}
	I0211 02:02:49.451518   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined IP address 192.168.39.211 and MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:49.451692   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHPort
	I0211 02:02:49.451853   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHKeyPath
	I0211 02:02:49.451965   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHUsername
	I0211 02:02:49.452075   20276 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/addons-046133/id_rsa Username:docker}
	I0211 02:02:49.452377   20276 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I0211 02:02:49.453435   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:49.453805   20276 main.go:141] libmachine: (addons-046133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:5c:34", ip: ""} in network mk-addons-046133: {Iface:virbr1 ExpiryTime:2025-02-11 03:02:19 +0000 UTC Type:0 Mac:52:54:00:c7:5c:34 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-046133 Clientid:01:52:54:00:c7:5c:34}
	I0211 02:02:49.453848   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined IP address 192.168.39.211 and MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:49.453910   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHPort
	I0211 02:02:49.454090   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHKeyPath
	I0211 02:02:49.454235   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHUsername
	I0211 02:02:49.454378   20276 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/addons-046133/id_rsa Username:docker}
	I0211 02:02:49.454732   20276 out.go:177]   - Using image docker.io/registry:2.8.3
	I0211 02:02:49.455889   20276 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0211 02:02:49.455930   20276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0211 02:02:49.455951   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHHostname
	I0211 02:02:49.458749   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:49.459203   20276 main.go:141] libmachine: (addons-046133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:5c:34", ip: ""} in network mk-addons-046133: {Iface:virbr1 ExpiryTime:2025-02-11 03:02:19 +0000 UTC Type:0 Mac:52:54:00:c7:5c:34 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-046133 Clientid:01:52:54:00:c7:5c:34}
	I0211 02:02:49.459229   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined IP address 192.168.39.211 and MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:49.459488   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHPort
	I0211 02:02:49.459644   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHKeyPath
	I0211 02:02:49.459789   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHUsername
	I0211 02:02:49.459909   20276 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/addons-046133/id_rsa Username:docker}
	I0211 02:02:49.463208   20276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44051
	I0211 02:02:49.463553   20276 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:02:49.463975   20276 main.go:141] libmachine: Using API Version  1
	I0211 02:02:49.464016   20276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:02:49.464340   20276 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:02:49.464529   20276 main.go:141] libmachine: (addons-046133) Calling .GetState
	I0211 02:02:49.466034   20276 main.go:141] libmachine: (addons-046133) Calling .DriverName
	I0211 02:02:49.466354   20276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43997
	I0211 02:02:49.466908   20276 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:02:49.467596   20276 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0211 02:02:49.467630   20276 main.go:141] libmachine: Using API Version  1
	I0211 02:02:49.467653   20276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:02:49.467951   20276 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:02:49.468163   20276 main.go:141] libmachine: (addons-046133) Calling .GetState
	I0211 02:02:49.469740   20276 main.go:141] libmachine: (addons-046133) Calling .DriverName
	I0211 02:02:49.469974   20276 out.go:177]   - Using image docker.io/busybox:stable
	I0211 02:02:49.470060   20276 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0211 02:02:49.470071   20276 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0211 02:02:49.470083   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHHostname
	I0211 02:02:49.471234   20276 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0211 02:02:49.471256   20276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0211 02:02:49.471274   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHHostname
	I0211 02:02:49.473071   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:49.473430   20276 main.go:141] libmachine: (addons-046133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:5c:34", ip: ""} in network mk-addons-046133: {Iface:virbr1 ExpiryTime:2025-02-11 03:02:19 +0000 UTC Type:0 Mac:52:54:00:c7:5c:34 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-046133 Clientid:01:52:54:00:c7:5c:34}
	I0211 02:02:49.473459   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined IP address 192.168.39.211 and MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:49.473636   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHPort
	I0211 02:02:49.473829   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHKeyPath
	I0211 02:02:49.473975   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHUsername
	I0211 02:02:49.474133   20276 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/addons-046133/id_rsa Username:docker}
	I0211 02:02:49.474317   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:49.474714   20276 main.go:141] libmachine: (addons-046133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:5c:34", ip: ""} in network mk-addons-046133: {Iface:virbr1 ExpiryTime:2025-02-11 03:02:19 +0000 UTC Type:0 Mac:52:54:00:c7:5c:34 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-046133 Clientid:01:52:54:00:c7:5c:34}
	I0211 02:02:49.474752   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined IP address 192.168.39.211 and MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:49.474881   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHPort
	I0211 02:02:49.475035   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHKeyPath
	I0211 02:02:49.475172   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHUsername
	I0211 02:02:49.475319   20276 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/addons-046133/id_rsa Username:docker}
	W0211 02:02:49.479687   20276 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:47814->192.168.39.211:22: read: connection reset by peer
	I0211 02:02:49.479710   20276 retry.go:31] will retry after 335.25132ms: ssh: handshake failed: read tcp 192.168.39.1:47814->192.168.39.211:22: read: connection reset by peer
	I0211 02:02:49.708613   20276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0211 02:02:49.878543   20276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0211 02:02:49.882707   20276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0211 02:02:49.899299   20276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0211 02:02:49.925138   20276 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0211 02:02:49.925171   20276 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0211 02:02:49.935013   20276 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0211 02:02:49.935034   20276 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0211 02:02:49.975047   20276 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0211 02:02:49.975082   20276 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0211 02:02:50.006301   20276 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0211 02:02:50.006330   20276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
	I0211 02:02:50.027614   20276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0211 02:02:50.044563   20276 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0211 02:02:50.044587   20276 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0211 02:02:50.064499   20276 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0211 02:02:50.064519   20276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0211 02:02:50.074147   20276 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0211 02:02:50.074152   20276 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0211 02:02:50.076544   20276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0211 02:02:50.081514   20276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0211 02:02:50.118474   20276 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0211 02:02:50.118497   20276 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0211 02:02:50.151551   20276 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0211 02:02:50.151571   20276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0211 02:02:50.175097   20276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0211 02:02:50.217459   20276 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0211 02:02:50.217511   20276 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0211 02:02:50.231016   20276 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0211 02:02:50.231038   20276 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0211 02:02:50.274486   20276 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0211 02:02:50.274523   20276 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0211 02:02:50.362126   20276 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0211 02:02:50.362165   20276 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0211 02:02:50.375833   20276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0211 02:02:50.394668   20276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0211 02:02:50.419111   20276 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0211 02:02:50.419133   20276 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0211 02:02:50.438370   20276 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0211 02:02:50.438408   20276 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0211 02:02:50.505955   20276 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0211 02:02:50.505986   20276 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0211 02:02:50.618304   20276 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0211 02:02:50.618325   20276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0211 02:02:50.640045   20276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0211 02:02:50.647257   20276 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0211 02:02:50.647284   20276 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0211 02:02:50.708679   20276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.000025244s)
	I0211 02:02:50.708733   20276 main.go:141] libmachine: Making call to close driver server
	I0211 02:02:50.708748   20276 main.go:141] libmachine: (addons-046133) Calling .Close
	I0211 02:02:50.708997   20276 main.go:141] libmachine: Successfully made call to close driver server
	I0211 02:02:50.709017   20276 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 02:02:50.709027   20276 main.go:141] libmachine: Making call to close driver server
	I0211 02:02:50.709036   20276 main.go:141] libmachine: (addons-046133) Calling .Close
	I0211 02:02:50.709308   20276 main.go:141] libmachine: Successfully made call to close driver server
	I0211 02:02:50.709322   20276 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 02:02:50.713951   20276 main.go:141] libmachine: Making call to close driver server
	I0211 02:02:50.713972   20276 main.go:141] libmachine: (addons-046133) Calling .Close
	I0211 02:02:50.714243   20276 main.go:141] libmachine: Successfully made call to close driver server
	I0211 02:02:50.714255   20276 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 02:02:50.777657   20276 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0211 02:02:50.777684   20276 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0211 02:02:50.897223   20276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0211 02:02:50.924508   20276 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0211 02:02:50.924541   20276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0211 02:02:51.129384   20276 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0211 02:02:51.129414   20276 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0211 02:02:51.136890   20276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0211 02:02:51.228454   20276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.349870604s)
	I0211 02:02:51.228508   20276 main.go:141] libmachine: Making call to close driver server
	I0211 02:02:51.228524   20276 main.go:141] libmachine: (addons-046133) Calling .Close
	I0211 02:02:51.228764   20276 main.go:141] libmachine: (addons-046133) DBG | Closing plugin on server side
	I0211 02:02:51.228807   20276 main.go:141] libmachine: Successfully made call to close driver server
	I0211 02:02:51.228815   20276 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 02:02:51.228826   20276 main.go:141] libmachine: Making call to close driver server
	I0211 02:02:51.228838   20276 main.go:141] libmachine: (addons-046133) Calling .Close
	I0211 02:02:51.229070   20276 main.go:141] libmachine: Successfully made call to close driver server
	I0211 02:02:51.229095   20276 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 02:02:51.429826   20276 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0211 02:02:51.429847   20276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0211 02:02:51.759545   20276 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0211 02:02:51.759581   20276 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0211 02:02:51.946379   20276 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0211 02:02:51.946399   20276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0211 02:02:52.098573   20276 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0211 02:02:52.098593   20276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0211 02:02:52.436794   20276 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0211 02:02:52.436828   20276 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0211 02:02:52.734373   20276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0211 02:02:52.861413   20276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.978673884s)
	I0211 02:02:52.861457   20276 main.go:141] libmachine: Making call to close driver server
	I0211 02:02:52.861477   20276 main.go:141] libmachine: (addons-046133) Calling .Close
	I0211 02:02:52.861787   20276 main.go:141] libmachine: Successfully made call to close driver server
	I0211 02:02:52.861806   20276 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 02:02:52.861810   20276 main.go:141] libmachine: (addons-046133) DBG | Closing plugin on server side
	I0211 02:02:52.861819   20276 main.go:141] libmachine: Making call to close driver server
	I0211 02:02:52.861830   20276 main.go:141] libmachine: (addons-046133) Calling .Close
	I0211 02:02:52.862078   20276 main.go:141] libmachine: Successfully made call to close driver server
	I0211 02:02:52.862107   20276 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 02:02:56.197750   20276 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0211 02:02:56.197798   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHHostname
	I0211 02:02:56.200692   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:56.201171   20276 main.go:141] libmachine: (addons-046133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:5c:34", ip: ""} in network mk-addons-046133: {Iface:virbr1 ExpiryTime:2025-02-11 03:02:19 +0000 UTC Type:0 Mac:52:54:00:c7:5c:34 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-046133 Clientid:01:52:54:00:c7:5c:34}
	I0211 02:02:56.201203   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined IP address 192.168.39.211 and MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:56.201403   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHPort
	I0211 02:02:56.201613   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHKeyPath
	I0211 02:02:56.201769   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHUsername
	I0211 02:02:56.201953   20276 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/addons-046133/id_rsa Username:docker}
	I0211 02:02:56.719357   20276 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0211 02:02:56.820755   20276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.921425258s)
	I0211 02:02:56.820809   20276 main.go:141] libmachine: Making call to close driver server
	I0211 02:02:56.820820   20276 main.go:141] libmachine: (addons-046133) Calling .Close
	I0211 02:02:56.820814   20276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.79316641s)
	I0211 02:02:56.820830   20276 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.746652901s)
	I0211 02:02:56.820859   20276 main.go:141] libmachine: Making call to close driver server
	I0211 02:02:56.820869   20276 main.go:141] libmachine: (addons-046133) Calling .Close
	I0211 02:02:56.820897   20276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (6.744330835s)
	I0211 02:02:56.820930   20276 main.go:141] libmachine: Making call to close driver server
	I0211 02:02:56.820944   20276 main.go:141] libmachine: (addons-046133) Calling .Close
	I0211 02:02:56.820979   20276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.739445484s)
	I0211 02:02:56.821003   20276 main.go:141] libmachine: Making call to close driver server
	I0211 02:02:56.821019   20276 main.go:141] libmachine: (addons-046133) Calling .Close
	I0211 02:02:56.820869   20276 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (6.746627884s)
	I0211 02:02:56.821062   20276 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0211 02:02:56.821069   20276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (6.64593131s)
	I0211 02:02:56.821087   20276 main.go:141] libmachine: Making call to close driver server
	I0211 02:02:56.821096   20276 main.go:141] libmachine: (addons-046133) Calling .Close
	I0211 02:02:56.821182   20276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.445318817s)
	I0211 02:02:56.821196   20276 main.go:141] libmachine: Making call to close driver server
	I0211 02:02:56.821204   20276 main.go:141] libmachine: (addons-046133) Calling .Close
	I0211 02:02:56.821233   20276 main.go:141] libmachine: (addons-046133) DBG | Closing plugin on server side
	I0211 02:02:56.821285   20276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.426584543s)
	I0211 02:02:56.822916   20276 main.go:141] libmachine: Making call to close driver server
	I0211 02:02:56.822926   20276 main.go:141] libmachine: (addons-046133) Calling .Close
	I0211 02:02:56.821354   20276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.181282917s)
	I0211 02:02:56.822974   20276 main.go:141] libmachine: Making call to close driver server
	I0211 02:02:56.822983   20276 main.go:141] libmachine: (addons-046133) Calling .Close
	I0211 02:02:56.821399   20276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.924150342s)
	I0211 02:02:56.823021   20276 main.go:141] libmachine: Making call to close driver server
	I0211 02:02:56.823028   20276 main.go:141] libmachine: (addons-046133) Calling .Close
	I0211 02:02:56.821499   20276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.684580101s)
	W0211 02:02:56.823273   20276 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0211 02:02:56.823297   20276 retry.go:31] will retry after 178.967279ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
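	
	(Editor's note on the failure above: this is the usual CRD establishment race. The VolumeSnapshotClass object is applied in the same kubectl batch as the snapshot.storage.k8s.io CRDs, so the API server has not yet registered the new kind when csi-hostpath-snapshotclass.yaml is submitted; minikube handles it by retrying, and at 02:02:57.002735 below it re-applies the same files with --force. A minimal manual sketch of the equivalent fix, assuming only the YAML paths shown in this log and standard kubectl behaviour, would be to apply the CRDs on their own and wait for them to become established before applying the class:
	
	  # apply the snapshot CRDs first
	  kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	  # block until the API server reports the CRD as Established
	  kubectl wait --for=condition=established --timeout=60s crd/volumesnapshotclasses.snapshot.storage.k8s.io
	  # now the VolumeSnapshotClass kind resolves and this no longer fails
	  kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	
	This sketch is illustrative only and is not part of the minikube addon code path recorded in the log.)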
	I0211 02:02:56.821528   20276 main.go:141] libmachine: (addons-046133) DBG | Closing plugin on server side
	I0211 02:02:56.821552   20276 main.go:141] libmachine: (addons-046133) DBG | Closing plugin on server side
	I0211 02:02:56.821577   20276 main.go:141] libmachine: Successfully made call to close driver server
	I0211 02:02:56.823328   20276 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 02:02:56.823336   20276 main.go:141] libmachine: Making call to close driver server
	I0211 02:02:56.823343   20276 main.go:141] libmachine: (addons-046133) Calling .Close
	I0211 02:02:56.821596   20276 main.go:141] libmachine: (addons-046133) DBG | Closing plugin on server side
	I0211 02:02:56.821613   20276 main.go:141] libmachine: Successfully made call to close driver server
	I0211 02:02:56.823385   20276 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 02:02:56.823392   20276 main.go:141] libmachine: Making call to close driver server
	I0211 02:02:56.823400   20276 main.go:141] libmachine: (addons-046133) Calling .Close
	I0211 02:02:56.821626   20276 main.go:141] libmachine: (addons-046133) DBG | Closing plugin on server side
	I0211 02:02:56.821644   20276 main.go:141] libmachine: Successfully made call to close driver server
	I0211 02:02:56.823497   20276 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 02:02:56.823505   20276 main.go:141] libmachine: Making call to close driver server
	I0211 02:02:56.823511   20276 main.go:141] libmachine: (addons-046133) Calling .Close
	I0211 02:02:56.821661   20276 main.go:141] libmachine: Successfully made call to close driver server
	I0211 02:02:56.823752   20276 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 02:02:56.823761   20276 main.go:141] libmachine: Making call to close driver server
	I0211 02:02:56.823768   20276 main.go:141] libmachine: (addons-046133) Calling .Close
	I0211 02:02:56.821796   20276 node_ready.go:35] waiting up to 6m0s for node "addons-046133" to be "Ready" ...
	I0211 02:02:56.822222   20276 main.go:141] libmachine: Successfully made call to close driver server
	I0211 02:02:56.824033   20276 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 02:02:56.824042   20276 main.go:141] libmachine: Making call to close driver server
	I0211 02:02:56.824049   20276 main.go:141] libmachine: (addons-046133) Calling .Close
	I0211 02:02:56.822244   20276 main.go:141] libmachine: (addons-046133) DBG | Closing plugin on server side
	I0211 02:02:56.824106   20276 main.go:141] libmachine: Successfully made call to close driver server
	I0211 02:02:56.824113   20276 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 02:02:56.824128   20276 main.go:141] libmachine: Making call to close driver server
	I0211 02:02:56.824134   20276 main.go:141] libmachine: (addons-046133) Calling .Close
	I0211 02:02:56.825734   20276 main.go:141] libmachine: (addons-046133) DBG | Closing plugin on server side
	I0211 02:02:56.825751   20276 main.go:141] libmachine: (addons-046133) DBG | Closing plugin on server side
	I0211 02:02:56.825755   20276 main.go:141] libmachine: Successfully made call to close driver server
	I0211 02:02:56.825761   20276 main.go:141] libmachine: Successfully made call to close driver server
	I0211 02:02:56.825768   20276 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 02:02:56.825777   20276 main.go:141] libmachine: (addons-046133) DBG | Closing plugin on server side
	I0211 02:02:56.825782   20276 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 02:02:56.825793   20276 addons.go:479] Verifying addon ingress=true in "addons-046133"
	I0211 02:02:56.825802   20276 main.go:141] libmachine: (addons-046133) DBG | Closing plugin on server side
	I0211 02:02:56.825833   20276 main.go:141] libmachine: Successfully made call to close driver server
	I0211 02:02:56.825847   20276 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 02:02:56.825857   20276 main.go:141] libmachine: Successfully made call to close driver server
	I0211 02:02:56.825878   20276 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 02:02:56.825946   20276 main.go:141] libmachine: (addons-046133) DBG | Closing plugin on server side
	I0211 02:02:56.825994   20276 main.go:141] libmachine: Successfully made call to close driver server
	I0211 02:02:56.826009   20276 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 02:02:56.826084   20276 main.go:141] libmachine: Successfully made call to close driver server
	I0211 02:02:56.826113   20276 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 02:02:56.826149   20276 main.go:141] libmachine: Making call to close driver server
	I0211 02:02:56.826162   20276 main.go:141] libmachine: (addons-046133) DBG | Closing plugin on server side
	I0211 02:02:56.826167   20276 main.go:141] libmachine: (addons-046133) Calling .Close
	I0211 02:02:56.826204   20276 main.go:141] libmachine: Successfully made call to close driver server
	I0211 02:02:56.826170   20276 main.go:141] libmachine: (addons-046133) DBG | Closing plugin on server side
	I0211 02:02:56.826225   20276 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 02:02:56.826185   20276 main.go:141] libmachine: Successfully made call to close driver server
	I0211 02:02:56.826248   20276 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 02:02:56.826259   20276 main.go:141] libmachine: Making call to close driver server
	I0211 02:02:56.826265   20276 main.go:141] libmachine: (addons-046133) Calling .Close
	I0211 02:02:56.826190   20276 main.go:141] libmachine: Successfully made call to close driver server
	I0211 02:02:56.826279   20276 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 02:02:56.826301   20276 main.go:141] libmachine: Making call to close driver server
	I0211 02:02:56.826312   20276 main.go:141] libmachine: (addons-046133) Calling .Close
	I0211 02:02:56.826579   20276 main.go:141] libmachine: (addons-046133) DBG | Closing plugin on server side
	I0211 02:02:56.826609   20276 main.go:141] libmachine: Successfully made call to close driver server
	I0211 02:02:56.826616   20276 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 02:02:56.826693   20276 main.go:141] libmachine: (addons-046133) DBG | Closing plugin on server side
	I0211 02:02:56.826727   20276 main.go:141] libmachine: Successfully made call to close driver server
	I0211 02:02:56.827929   20276 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 02:02:56.827940   20276 addons.go:479] Verifying addon metrics-server=true in "addons-046133"
	I0211 02:02:56.826947   20276 main.go:141] libmachine: Successfully made call to close driver server
	I0211 02:02:56.828017   20276 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 02:02:56.828022   20276 out.go:177] * Verifying ingress addon...
	I0211 02:02:56.828030   20276 addons.go:479] Verifying addon registry=true in "addons-046133"
	I0211 02:02:56.826974   20276 main.go:141] libmachine: (addons-046133) DBG | Closing plugin on server side
	I0211 02:02:56.829242   20276 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-046133 service yakd-dashboard -n yakd-dashboard
	
	I0211 02:02:56.829327   20276 out.go:177] * Verifying registry addon...
	I0211 02:02:56.829958   20276 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0211 02:02:56.831117   20276 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0211 02:02:56.839524   20276 node_ready.go:49] node "addons-046133" has status "Ready":"True"
	I0211 02:02:56.839539   20276 node_ready.go:38] duration metric: took 15.697705ms for node "addons-046133" to be "Ready" ...
	I0211 02:02:56.839547   20276 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0211 02:02:56.845779   20276 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0211 02:02:56.845792   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:02:56.877782   20276 main.go:141] libmachine: Making call to close driver server
	I0211 02:02:56.877807   20276 main.go:141] libmachine: (addons-046133) Calling .Close
	I0211 02:02:56.878147   20276 main.go:141] libmachine: Successfully made call to close driver server
	I0211 02:02:56.878172   20276 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 02:02:56.885375   20276 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0211 02:02:56.885390   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:02:56.905551   20276 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-ndfgq" in "kube-system" namespace to be "Ready" ...
	I0211 02:02:56.989873   20276 addons.go:238] Setting addon gcp-auth=true in "addons-046133"
	I0211 02:02:56.989931   20276 host.go:66] Checking if "addons-046133" exists ...
	I0211 02:02:56.990338   20276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:02:56.990387   20276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:02:57.002735   20276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0211 02:02:57.005481   20276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39173
	I0211 02:02:57.006058   20276 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:02:57.006679   20276 main.go:141] libmachine: Using API Version  1
	I0211 02:02:57.006707   20276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:02:57.007078   20276 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:02:57.007570   20276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:02:57.007609   20276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:02:57.022219   20276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39207
	I0211 02:02:57.022699   20276 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:02:57.023265   20276 main.go:141] libmachine: Using API Version  1
	I0211 02:02:57.023294   20276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:02:57.023637   20276 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:02:57.023886   20276 main.go:141] libmachine: (addons-046133) Calling .GetState
	I0211 02:02:57.025522   20276 main.go:141] libmachine: (addons-046133) Calling .DriverName
	I0211 02:02:57.025750   20276 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0211 02:02:57.025781   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHHostname
	I0211 02:02:57.028407   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:57.028787   20276 main.go:141] libmachine: (addons-046133) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:5c:34", ip: ""} in network mk-addons-046133: {Iface:virbr1 ExpiryTime:2025-02-11 03:02:19 +0000 UTC Type:0 Mac:52:54:00:c7:5c:34 Iaid: IPaddr:192.168.39.211 Prefix:24 Hostname:addons-046133 Clientid:01:52:54:00:c7:5c:34}
	I0211 02:02:57.028815   20276 main.go:141] libmachine: (addons-046133) DBG | domain addons-046133 has defined IP address 192.168.39.211 and MAC address 52:54:00:c7:5c:34 in network mk-addons-046133
	I0211 02:02:57.028916   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHPort
	I0211 02:02:57.029060   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHKeyPath
	I0211 02:02:57.029212   20276 main.go:141] libmachine: (addons-046133) Calling .GetSSHUsername
	I0211 02:02:57.029323   20276 sshutil.go:53] new ssh client: &{IP:192.168.39.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/addons-046133/id_rsa Username:docker}
	I0211 02:02:57.324895   20276 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-046133" context rescaled to 1 replicas
	I0211 02:02:57.337598   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:02:57.337685   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:02:57.837579   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:02:57.837579   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:02:58.342609   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:02:58.343171   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:02:58.615264   20276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.880845192s)
	I0211 02:02:58.615326   20276 main.go:141] libmachine: Making call to close driver server
	I0211 02:02:58.615342   20276 main.go:141] libmachine: (addons-046133) Calling .Close
	I0211 02:02:58.615595   20276 main.go:141] libmachine: Successfully made call to close driver server
	I0211 02:02:58.615644   20276 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 02:02:58.615656   20276 main.go:141] libmachine: Making call to close driver server
	I0211 02:02:58.615663   20276 main.go:141] libmachine: (addons-046133) Calling .Close
	I0211 02:02:58.615678   20276 main.go:141] libmachine: (addons-046133) DBG | Closing plugin on server side
	I0211 02:02:58.615880   20276 main.go:141] libmachine: Successfully made call to close driver server
	I0211 02:02:58.615895   20276 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 02:02:58.615906   20276 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-046133"
	I0211 02:02:58.615924   20276 main.go:141] libmachine: (addons-046133) DBG | Closing plugin on server side
	I0211 02:02:58.617664   20276 out.go:177] * Verifying csi-hostpath-driver addon...
	I0211 02:02:58.619721   20276 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0211 02:02:58.637216   20276 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0211 02:02:58.637234   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:02:58.809727   20276 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.783946489s)
	I0211 02:02:58.809832   20276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.807037778s)
	I0211 02:02:58.809906   20276 main.go:141] libmachine: Making call to close driver server
	I0211 02:02:58.809919   20276 main.go:141] libmachine: (addons-046133) Calling .Close
	I0211 02:02:58.810170   20276 main.go:141] libmachine: (addons-046133) DBG | Closing plugin on server side
	I0211 02:02:58.810221   20276 main.go:141] libmachine: Successfully made call to close driver server
	I0211 02:02:58.810237   20276 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 02:02:58.810254   20276 main.go:141] libmachine: Making call to close driver server
	I0211 02:02:58.810266   20276 main.go:141] libmachine: (addons-046133) Calling .Close
	I0211 02:02:58.810518   20276 main.go:141] libmachine: Successfully made call to close driver server
	I0211 02:02:58.810531   20276 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 02:02:58.811688   20276 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0211 02:02:58.813119   20276 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0211 02:02:58.814209   20276 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0211 02:02:58.814227   20276 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0211 02:02:58.833876   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:02:58.833904   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:02:58.861596   20276 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0211 02:02:58.861619   20276 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0211 02:02:58.879637   20276 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0211 02:02:58.879666   20276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0211 02:02:58.903109   20276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0211 02:02:58.912115   20276 pod_ready.go:103] pod "amd-gpu-device-plugin-ndfgq" in "kube-system" namespace has status "Ready":"False"
	I0211 02:02:59.124776   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:02:59.336868   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:02:59.337260   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:02:59.632059   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:02:59.920607   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:02:59.920609   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:02:59.968271   20276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.065119314s)
	I0211 02:02:59.968329   20276 main.go:141] libmachine: Making call to close driver server
	I0211 02:02:59.968338   20276 main.go:141] libmachine: (addons-046133) Calling .Close
	I0211 02:02:59.968585   20276 main.go:141] libmachine: Successfully made call to close driver server
	I0211 02:02:59.968608   20276 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 02:02:59.968618   20276 main.go:141] libmachine: Making call to close driver server
	I0211 02:02:59.968626   20276 main.go:141] libmachine: (addons-046133) DBG | Closing plugin on server side
	I0211 02:02:59.968628   20276 main.go:141] libmachine: (addons-046133) Calling .Close
	I0211 02:02:59.968862   20276 main.go:141] libmachine: Successfully made call to close driver server
	I0211 02:02:59.968878   20276 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 02:02:59.969722   20276 addons.go:479] Verifying addon gcp-auth=true in "addons-046133"
	I0211 02:02:59.972210   20276 out.go:177] * Verifying gcp-auth addon...
	I0211 02:02:59.973892   20276 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0211 02:03:00.005866   20276 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0211 02:03:00.005884   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:00.133924   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:00.334721   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:00.334935   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:00.476896   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:00.623033   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:00.833345   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:00.833829   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:00.977075   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:01.123523   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:01.334833   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:01.334992   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:01.412657   20276 pod_ready.go:103] pod "amd-gpu-device-plugin-ndfgq" in "kube-system" namespace has status "Ready":"False"
	I0211 02:03:01.477236   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:01.623727   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:01.833529   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:01.834944   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:01.977453   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:02.123951   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:02.334749   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:02.335014   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:02.476527   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:02.623725   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:02.834007   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:02.834499   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:02.978045   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:03.122957   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:03.339302   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:03.339362   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:03.587129   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:03.589031   20276 pod_ready.go:103] pod "amd-gpu-device-plugin-ndfgq" in "kube-system" namespace has status "Ready":"False"
	I0211 02:03:03.627355   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:03.834518   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:03.834674   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:03.977264   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:04.123534   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:04.333516   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:04.334520   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:04.476937   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:04.622859   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:04.835300   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:04.835583   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:04.977024   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:05.124192   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:05.333451   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:05.334195   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:05.476949   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:05.625088   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:05.834457   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:05.835289   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:05.910825   20276 pod_ready.go:103] pod "amd-gpu-device-plugin-ndfgq" in "kube-system" namespace has status "Ready":"False"
	I0211 02:03:05.978931   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:06.124200   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:06.333889   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:06.335993   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:06.477099   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:06.624071   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:06.832736   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:06.833987   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:06.977208   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:07.123557   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:07.333391   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:07.335824   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:07.477409   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:07.626089   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:07.834756   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:07.834796   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:07.976678   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:08.122372   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:08.333725   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:08.334741   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:08.415740   20276 pod_ready.go:103] pod "amd-gpu-device-plugin-ndfgq" in "kube-system" namespace has status "Ready":"False"
	I0211 02:03:08.477817   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:08.622804   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:08.832778   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:08.834440   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:08.977765   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:09.123061   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:09.334004   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:09.334097   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:09.476949   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:09.623785   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:09.833695   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:09.833883   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:09.976839   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:10.123009   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:10.333888   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:10.334470   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:10.477030   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:10.623121   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:10.833726   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:10.834637   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:10.909947   20276 pod_ready.go:103] pod "amd-gpu-device-plugin-ndfgq" in "kube-system" namespace has status "Ready":"False"
	I0211 02:03:10.976462   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:11.123303   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:11.333848   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:11.334576   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:11.477224   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:11.623920   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:12.242756   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:12.242756   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:12.244649   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:12.245017   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:12.333320   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:12.334913   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:12.476842   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:12.623283   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:12.833716   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:12.833917   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:12.977516   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:13.123592   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:13.333366   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:13.334808   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:13.414221   20276 pod_ready.go:103] pod "amd-gpu-device-plugin-ndfgq" in "kube-system" namespace has status "Ready":"False"
	I0211 02:03:13.476788   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:13.622743   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:13.832652   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:13.834501   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:13.977598   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:14.369238   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:14.369414   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:14.369653   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:14.478073   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:14.623073   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:14.833612   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:14.834163   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:14.977557   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:15.123478   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:15.333000   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:15.333956   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:15.416529   20276 pod_ready.go:103] pod "amd-gpu-device-plugin-ndfgq" in "kube-system" namespace has status "Ready":"False"
	I0211 02:03:15.478122   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:15.623564   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:15.834183   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:15.834377   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:15.978061   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:16.123573   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:16.333411   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:16.335060   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:16.478970   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:16.623895   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:16.834677   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:16.835062   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:16.977489   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:17.123363   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:17.334074   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:17.334092   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:17.477744   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:17.636271   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:17.834393   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:17.834480   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:17.912622   20276 pod_ready.go:103] pod "amd-gpu-device-plugin-ndfgq" in "kube-system" namespace has status "Ready":"False"
	I0211 02:03:17.977779   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:18.122433   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:18.334303   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:18.334307   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:18.477501   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:18.623659   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:18.834903   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:18.835102   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:18.977338   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:19.124145   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:19.332838   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:19.333770   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:19.413463   20276 pod_ready.go:93] pod "amd-gpu-device-plugin-ndfgq" in "kube-system" namespace has status "Ready":"True"
	I0211 02:03:19.413485   20276 pod_ready.go:82] duration metric: took 22.507898472s for pod "amd-gpu-device-plugin-ndfgq" in "kube-system" namespace to be "Ready" ...
	I0211 02:03:19.413495   20276 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-bzgtq" in "kube-system" namespace to be "Ready" ...
	I0211 02:03:19.418028   20276 pod_ready.go:93] pod "coredns-668d6bf9bc-bzgtq" in "kube-system" namespace has status "Ready":"True"
	I0211 02:03:19.418049   20276 pod_ready.go:82] duration metric: took 4.546835ms for pod "coredns-668d6bf9bc-bzgtq" in "kube-system" namespace to be "Ready" ...
	I0211 02:03:19.418058   20276 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-z9fz8" in "kube-system" namespace to be "Ready" ...
	I0211 02:03:19.419688   20276 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-z9fz8" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-z9fz8" not found
	I0211 02:03:19.419704   20276 pod_ready.go:82] duration metric: took 1.641412ms for pod "coredns-668d6bf9bc-z9fz8" in "kube-system" namespace to be "Ready" ...
	E0211 02:03:19.419712   20276 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-z9fz8" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-z9fz8" not found
	I0211 02:03:19.419717   20276 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-046133" in "kube-system" namespace to be "Ready" ...
	I0211 02:03:19.423520   20276 pod_ready.go:93] pod "etcd-addons-046133" in "kube-system" namespace has status "Ready":"True"
	I0211 02:03:19.423533   20276 pod_ready.go:82] duration metric: took 3.811037ms for pod "etcd-addons-046133" in "kube-system" namespace to be "Ready" ...
	I0211 02:03:19.423540   20276 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-046133" in "kube-system" namespace to be "Ready" ...
	I0211 02:03:19.427416   20276 pod_ready.go:93] pod "kube-apiserver-addons-046133" in "kube-system" namespace has status "Ready":"True"
	I0211 02:03:19.427436   20276 pod_ready.go:82] duration metric: took 3.889468ms for pod "kube-apiserver-addons-046133" in "kube-system" namespace to be "Ready" ...
	I0211 02:03:19.427448   20276 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-046133" in "kube-system" namespace to be "Ready" ...
	I0211 02:03:19.478249   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:19.610508   20276 pod_ready.go:93] pod "kube-controller-manager-addons-046133" in "kube-system" namespace has status "Ready":"True"
	I0211 02:03:19.610538   20276 pod_ready.go:82] duration metric: took 183.080531ms for pod "kube-controller-manager-addons-046133" in "kube-system" namespace to be "Ready" ...
	I0211 02:03:19.610554   20276 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-76r2h" in "kube-system" namespace to be "Ready" ...
	I0211 02:03:19.624714   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:19.832840   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:19.833959   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:19.976435   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:20.009740   20276 pod_ready.go:93] pod "kube-proxy-76r2h" in "kube-system" namespace has status "Ready":"True"
	I0211 02:03:20.009760   20276 pod_ready.go:82] duration metric: took 399.200013ms for pod "kube-proxy-76r2h" in "kube-system" namespace to be "Ready" ...
	I0211 02:03:20.009769   20276 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-046133" in "kube-system" namespace to be "Ready" ...
	I0211 02:03:20.122907   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:20.333362   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:20.334196   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:20.412626   20276 pod_ready.go:93] pod "kube-scheduler-addons-046133" in "kube-system" namespace has status "Ready":"True"
	I0211 02:03:20.412649   20276 pod_ready.go:82] duration metric: took 402.874752ms for pod "kube-scheduler-addons-046133" in "kube-system" namespace to be "Ready" ...
	I0211 02:03:20.412658   20276 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-7fbb699795-c4gg7" in "kube-system" namespace to be "Ready" ...
	I0211 02:03:20.477450   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:20.622618   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:20.833485   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:20.833522   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:20.977741   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:21.123449   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:21.332796   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:21.334267   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:21.477402   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:21.622485   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:21.834029   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:21.835125   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:21.976782   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:22.123824   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:22.333973   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:22.337451   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:22.418769   20276 pod_ready.go:103] pod "metrics-server-7fbb699795-c4gg7" in "kube-system" namespace has status "Ready":"False"
	I0211 02:03:22.477743   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:22.623792   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:22.835487   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:22.835567   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:22.976645   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:23.123545   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:23.333143   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:23.334015   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:23.476787   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:23.622625   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:23.834919   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:23.835029   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:23.977830   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:24.123377   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:24.334563   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:24.335052   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:24.478509   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:24.625102   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:24.834366   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:24.834479   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:24.917562   20276 pod_ready.go:103] pod "metrics-server-7fbb699795-c4gg7" in "kube-system" namespace has status "Ready":"False"
	I0211 02:03:24.976917   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:25.122609   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:25.333707   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:25.334322   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:25.477629   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:25.623514   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:25.833600   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:25.834803   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:25.977394   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:26.123749   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:26.332941   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:26.334021   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:26.477320   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:26.624264   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:26.833249   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:26.835350   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:26.918766   20276 pod_ready.go:103] pod "metrics-server-7fbb699795-c4gg7" in "kube-system" namespace has status "Ready":"False"
	I0211 02:03:26.977483   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:27.123657   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:27.340541   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:27.341835   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:27.489794   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:27.623631   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:27.833809   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:27.834847   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:27.917855   20276 pod_ready.go:93] pod "metrics-server-7fbb699795-c4gg7" in "kube-system" namespace has status "Ready":"True"
	I0211 02:03:27.917877   20276 pod_ready.go:82] duration metric: took 7.50521317s for pod "metrics-server-7fbb699795-c4gg7" in "kube-system" namespace to be "Ready" ...
	I0211 02:03:27.917887   20276 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-j9p8p" in "kube-system" namespace to be "Ready" ...
	I0211 02:03:27.921694   20276 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-j9p8p" in "kube-system" namespace has status "Ready":"True"
	I0211 02:03:27.921715   20276 pod_ready.go:82] duration metric: took 3.821389ms for pod "nvidia-device-plugin-daemonset-j9p8p" in "kube-system" namespace to be "Ready" ...
	I0211 02:03:27.921732   20276 pod_ready.go:39] duration metric: took 31.082175603s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0211 02:03:27.921746   20276 api_server.go:52] waiting for apiserver process to appear ...
	I0211 02:03:27.921790   20276 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 02:03:27.940792   20276 api_server.go:72] duration metric: took 38.662298254s to wait for apiserver process to appear ...
	I0211 02:03:27.940819   20276 api_server.go:88] waiting for apiserver healthz status ...
	I0211 02:03:27.940840   20276 api_server.go:253] Checking apiserver healthz at https://192.168.39.211:8443/healthz ...
	I0211 02:03:27.945396   20276 api_server.go:279] https://192.168.39.211:8443/healthz returned 200:
	ok
	I0211 02:03:27.946223   20276 api_server.go:141] control plane version: v1.32.1
	I0211 02:03:27.946247   20276 api_server.go:131] duration metric: took 5.420891ms to wait for apiserver health ...
	I0211 02:03:27.946257   20276 system_pods.go:43] waiting for kube-system pods to appear ...
	I0211 02:03:27.949680   20276 system_pods.go:59] 18 kube-system pods found
	I0211 02:03:27.949704   20276 system_pods.go:61] "amd-gpu-device-plugin-ndfgq" [f7623e9c-a193-449d-9b25-4529b7d52ed5] Running
	I0211 02:03:27.949709   20276 system_pods.go:61] "coredns-668d6bf9bc-bzgtq" [adaf0b21-a407-4841-8fd0-d7e110fd507d] Running
	I0211 02:03:27.949714   20276 system_pods.go:61] "csi-hostpath-attacher-0" [804fff63-2d4b-4ad3-9abd-ce2bbb268678] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0211 02:03:27.949720   20276 system_pods.go:61] "csi-hostpath-resizer-0" [eefcbc10-1a7d-4e34-a323-1120072b5011] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0211 02:03:27.949728   20276 system_pods.go:61] "csi-hostpathplugin-z9ssb" [943bfbf6-8625-43b6-9e8e-3e895c97d3e7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0211 02:03:27.949733   20276 system_pods.go:61] "etcd-addons-046133" [46c4d02e-912c-4edc-b489-d6e8b32d789f] Running
	I0211 02:03:27.949740   20276 system_pods.go:61] "kube-apiserver-addons-046133" [36b421da-bdca-4497-b69f-b0e6fcc299c5] Running
	I0211 02:03:27.949743   20276 system_pods.go:61] "kube-controller-manager-addons-046133" [923bb65f-473d-415f-acf2-fa40f310a493] Running
	I0211 02:03:27.949746   20276 system_pods.go:61] "kube-ingress-dns-minikube" [2cf3737a-7969-4ddc-9f51-dd9b5d4111a5] Running
	I0211 02:03:27.949749   20276 system_pods.go:61] "kube-proxy-76r2h" [5ef1cc31-3964-4acd-8302-4730917d0e9c] Running
	I0211 02:03:27.949752   20276 system_pods.go:61] "kube-scheduler-addons-046133" [46d87674-d5ff-4639-b209-17a3291ea1da] Running
	I0211 02:03:27.949756   20276 system_pods.go:61] "metrics-server-7fbb699795-c4gg7" [3bfecb29-117e-4bcd-9ef8-a1dd75da6f28] Running
	I0211 02:03:27.949760   20276 system_pods.go:61] "nvidia-device-plugin-daemonset-j9p8p" [0fcd8b34-feb0-44f0-830d-b4d79aa89065] Running
	I0211 02:03:27.949764   20276 system_pods.go:61] "registry-6c88467877-7ggp5" [19abba60-f7d5-44ce-9bd4-39e4c503abf4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0211 02:03:27.949768   20276 system_pods.go:61] "registry-proxy-zwrhv" [7949f39c-a8cd-4280-b842-e053bd5eaf1f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0211 02:03:27.949775   20276 system_pods.go:61] "snapshot-controller-68b874b76f-84tl8" [a72b8da8-d1f0-42d8-8357-d72e9d68eaf4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0211 02:03:27.949780   20276 system_pods.go:61] "snapshot-controller-68b874b76f-zdjg8" [d9f9d5e4-490b-4ce9-aa29-ee1624383789] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0211 02:03:27.949783   20276 system_pods.go:61] "storage-provisioner" [157227da-e1f6-42a9-aa1c-d3ba0a8d1a9d] Running
	I0211 02:03:27.949789   20276 system_pods.go:74] duration metric: took 3.526102ms to wait for pod list to return data ...
	I0211 02:03:27.949796   20276 default_sa.go:34] waiting for default service account to be created ...
	I0211 02:03:27.951748   20276 default_sa.go:45] found service account: "default"
	I0211 02:03:27.951764   20276 default_sa.go:55] duration metric: took 1.963353ms for default service account to be created ...
	I0211 02:03:27.951770   20276 system_pods.go:116] waiting for k8s-apps to be running ...
	I0211 02:03:27.954417   20276 system_pods.go:86] 18 kube-system pods found
	I0211 02:03:27.954437   20276 system_pods.go:89] "amd-gpu-device-plugin-ndfgq" [f7623e9c-a193-449d-9b25-4529b7d52ed5] Running
	I0211 02:03:27.954443   20276 system_pods.go:89] "coredns-668d6bf9bc-bzgtq" [adaf0b21-a407-4841-8fd0-d7e110fd507d] Running
	I0211 02:03:27.954451   20276 system_pods.go:89] "csi-hostpath-attacher-0" [804fff63-2d4b-4ad3-9abd-ce2bbb268678] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0211 02:03:27.954458   20276 system_pods.go:89] "csi-hostpath-resizer-0" [eefcbc10-1a7d-4e34-a323-1120072b5011] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0211 02:03:27.954465   20276 system_pods.go:89] "csi-hostpathplugin-z9ssb" [943bfbf6-8625-43b6-9e8e-3e895c97d3e7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0211 02:03:27.954470   20276 system_pods.go:89] "etcd-addons-046133" [46c4d02e-912c-4edc-b489-d6e8b32d789f] Running
	I0211 02:03:27.954477   20276 system_pods.go:89] "kube-apiserver-addons-046133" [36b421da-bdca-4497-b69f-b0e6fcc299c5] Running
	I0211 02:03:27.954482   20276 system_pods.go:89] "kube-controller-manager-addons-046133" [923bb65f-473d-415f-acf2-fa40f310a493] Running
	I0211 02:03:27.954490   20276 system_pods.go:89] "kube-ingress-dns-minikube" [2cf3737a-7969-4ddc-9f51-dd9b5d4111a5] Running
	I0211 02:03:27.954493   20276 system_pods.go:89] "kube-proxy-76r2h" [5ef1cc31-3964-4acd-8302-4730917d0e9c] Running
	I0211 02:03:27.954496   20276 system_pods.go:89] "kube-scheduler-addons-046133" [46d87674-d5ff-4639-b209-17a3291ea1da] Running
	I0211 02:03:27.954501   20276 system_pods.go:89] "metrics-server-7fbb699795-c4gg7" [3bfecb29-117e-4bcd-9ef8-a1dd75da6f28] Running
	I0211 02:03:27.954505   20276 system_pods.go:89] "nvidia-device-plugin-daemonset-j9p8p" [0fcd8b34-feb0-44f0-830d-b4d79aa89065] Running
	I0211 02:03:27.954509   20276 system_pods.go:89] "registry-6c88467877-7ggp5" [19abba60-f7d5-44ce-9bd4-39e4c503abf4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0211 02:03:27.954516   20276 system_pods.go:89] "registry-proxy-zwrhv" [7949f39c-a8cd-4280-b842-e053bd5eaf1f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0211 02:03:27.954522   20276 system_pods.go:89] "snapshot-controller-68b874b76f-84tl8" [a72b8da8-d1f0-42d8-8357-d72e9d68eaf4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0211 02:03:27.954530   20276 system_pods.go:89] "snapshot-controller-68b874b76f-zdjg8" [d9f9d5e4-490b-4ce9-aa29-ee1624383789] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0211 02:03:27.954533   20276 system_pods.go:89] "storage-provisioner" [157227da-e1f6-42a9-aa1c-d3ba0a8d1a9d] Running
	I0211 02:03:27.954542   20276 system_pods.go:126] duration metric: took 2.766903ms to wait for k8s-apps to be running ...
	I0211 02:03:27.954550   20276 system_svc.go:44] waiting for kubelet service to be running ....
	I0211 02:03:27.954588   20276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0211 02:03:27.971451   20276 system_svc.go:56] duration metric: took 16.892163ms WaitForService to wait for kubelet
	I0211 02:03:27.971491   20276 kubeadm.go:582] duration metric: took 38.693000527s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0211 02:03:27.971514   20276 node_conditions.go:102] verifying NodePressure condition ...
	I0211 02:03:27.977076   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:28.009678   20276 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0211 02:03:28.009708   20276 node_conditions.go:123] node cpu capacity is 2
	I0211 02:03:28.009720   20276 node_conditions.go:105] duration metric: took 38.201554ms to run NodePressure ...
	I0211 02:03:28.009731   20276 start.go:241] waiting for startup goroutines ...
	I0211 02:03:28.123890   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:28.336260   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:28.336329   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:28.477042   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:28.623966   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:28.833276   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:28.833823   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:28.977328   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:29.123731   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:29.333943   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:29.334524   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:29.477239   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:29.623151   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:29.832628   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:29.833956   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:29.977294   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:30.123441   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:30.333684   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:30.335804   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:30.479273   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:30.624119   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:30.833245   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:30.834382   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:30.977893   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:31.123043   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:31.332860   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:31.334831   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:31.476744   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:31.622565   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:31.971086   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:31.971234   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:31.976227   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:32.123093   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:32.335420   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:32.335769   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:32.477359   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:32.719599   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:32.833907   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:32.834168   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:32.977637   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:33.122737   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:33.334193   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:33.334828   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:33.477350   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:33.623888   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:33.834004   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:33.834704   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:33.976762   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:34.123091   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:34.332917   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:34.334226   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:34.476835   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:34.622789   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:34.835437   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:34.835549   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:34.977700   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:35.317550   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:35.333149   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:35.334230   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:35.476756   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:35.623547   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:35.833418   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:35.835440   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:35.980662   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:36.124430   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:36.333971   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:36.336235   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:36.477258   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:36.625907   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:36.835916   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:36.836118   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:36.976671   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:37.124319   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:37.333179   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:37.334851   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:37.477663   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:37.622719   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:37.833625   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:37.834576   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:37.978209   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:38.123272   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:38.334160   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:38.334640   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:38.477257   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:38.623277   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:38.834189   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:38.834435   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:38.977634   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:39.124377   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:39.333352   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:39.334302   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:39.477159   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:39.623272   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:39.833338   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:39.834620   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:39.977437   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:40.123452   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:40.334322   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:40.334514   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:40.477613   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:40.623468   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:40.834023   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:40.836358   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:40.977707   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:41.122733   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:41.333740   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:41.334689   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:41.477421   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:41.624221   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:41.834125   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:41.835385   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:41.977707   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:42.122596   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:42.333678   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:42.335355   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:42.476931   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:42.622752   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:42.834594   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:42.834599   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:42.977317   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:43.123753   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:43.333699   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:43.334386   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:43.476934   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:43.623069   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:43.834053   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:43.835301   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:43.977558   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:44.124441   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:44.334977   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:44.335247   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0211 02:03:44.477228   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:44.623635   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:44.833765   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:44.834948   20276 kapi.go:107] duration metric: took 48.003829997s to wait for kubernetes.io/minikube-addons=registry ...
	I0211 02:03:44.976507   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:45.123540   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:45.333684   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:45.477235   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:45.623736   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:45.834059   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:45.976865   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:46.122980   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:46.332781   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:46.477543   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:46.623387   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:46.832647   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:46.977488   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:47.123247   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:47.332902   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:47.476445   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:47.623560   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:47.833867   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:47.977830   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:48.123760   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:48.611384   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:48.613131   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:48.623803   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:48.835727   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:48.976896   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:49.122820   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:49.333513   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:49.477196   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:49.623369   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:49.832978   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:49.976714   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:50.122556   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:50.333525   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:50.477057   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:50.623500   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:50.833799   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:50.977314   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:51.123112   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:51.332859   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:51.477182   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:51.622990   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:51.832643   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:51.978008   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:52.123053   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:52.333275   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:52.477133   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:52.623082   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:52.833122   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:52.978291   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:53.126234   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:53.333115   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:53.476860   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:53.624104   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:53.833031   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:54.001326   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:54.124253   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:54.334378   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:54.477687   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:54.622539   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:54.836292   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:54.977666   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:55.122862   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:55.333585   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:55.477749   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:55.624831   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:55.836007   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:55.976307   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:56.123430   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:56.333681   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:56.477129   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:56.623233   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:56.835785   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:56.977681   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:57.123245   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:57.339387   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:57.476953   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:57.622391   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:57.833046   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:57.976396   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:58.123610   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:58.335055   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:58.476726   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:58.622703   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:58.833867   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:58.978786   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:59.123388   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:59.333532   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:59.478302   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:03:59.623747   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:03:59.833273   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:03:59.976801   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:04:00.122704   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:04:00.344433   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:04:00.476982   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:04:00.623116   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:04:00.832991   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:04:01.055428   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:04:01.123324   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:04:01.334861   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:04:01.478287   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:04:01.623244   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:04:01.832852   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:04:01.977376   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:04:02.123605   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:04:02.333983   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:04:02.476789   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:04:02.623552   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:04:02.833951   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:04:02.977729   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:04:03.124125   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:04:03.333599   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:04:03.479673   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:04:03.633824   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0211 02:04:03.836149   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:04:03.979582   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:04:04.125795   20276 kapi.go:107] duration metric: took 1m5.506073726s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0211 02:04:04.334080   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:04:04.477098   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:04:04.833013   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:04:04.976983   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:04:05.334265   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:04:05.477299   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:04:05.833795   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:04:06.029595   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:04:06.334182   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:04:06.476792   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:04:06.832700   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:04:06.977143   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:04:07.333795   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:04:07.477406   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:04:07.833640   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:04:07.977108   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:04:08.333070   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:04:08.476535   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:04:08.832921   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:04:08.976796   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:04:09.333405   20276 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0211 02:04:09.478922   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:04:09.834840   20276 kapi.go:107] duration metric: took 1m13.004878913s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0211 02:04:09.977751   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:04:10.476592   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:04:10.977003   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:04:11.477513   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:04:11.976555   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:04:12.476524   20276 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0211 02:04:12.977103   20276 kapi.go:107] duration metric: took 1m13.003207566s to wait for kubernetes.io/minikube-addons=gcp-auth ...
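The kapi.go:96/107 entries above are minikube repeatedly listing the pods that match a label selector, logging their phase until they leave Pending, and then recording how long the wait took. Below is a minimal sketch of that wait pattern using client-go; the kubeconfig path, namespace, selector, timeout, and poll interval are illustrative assumptions, not minikube's actual helper.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForLabel polls pods matching selector in ns until every one is Running
	// or the timeout passes, mirroring the "waiting for pod ... current state:
	// Pending" loop logged above. Illustrative sketch only, not kapi.go itself.
	func waitForLabel(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 {
				ready := true
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						ready = false
						break
					}
				}
				if ready {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond) // poll interval is an assumption
		}
		return fmt.Errorf("timed out waiting for %s", selector)
	}

	func main() {
		// Assumption: default kubeconfig at $HOME/.kube/config.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitForLabel(cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx", 6*time.Minute); err != nil {
			panic(err)
		}
	}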
	I0211 02:04:12.978621   20276 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-046133 cluster.
	I0211 02:04:12.979755   20276 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0211 02:04:12.980938   20276 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0211 02:04:12.982135   20276 out.go:177] * Enabled addons: default-storageclass, nvidia-device-plugin, ingress-dns, inspektor-gadget, cloud-spanner, storage-provisioner, amd-gpu-device-plugin, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0211 02:04:12.983373   20276 addons.go:514] duration metric: took 1m23.704849351s for enable addons: enabled=[default-storageclass nvidia-device-plugin ingress-dns inspektor-gadget cloud-spanner storage-provisioner amd-gpu-device-plugin metrics-server yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0211 02:04:12.983407   20276 start.go:246] waiting for cluster config update ...
	I0211 02:04:12.983423   20276 start.go:255] writing updated cluster config ...
	I0211 02:04:12.983662   20276 ssh_runner.go:195] Run: rm -f paused
	I0211 02:04:13.033304   20276 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0211 02:04:13.034901   20276 out.go:177] * Done! kubectl is now configured to use "addons-046133" cluster and "default" namespace by default
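As the gcp-auth messages above note, a pod opts out of credential mounting by carrying the `gcp-auth-skip-secret` label; the ingress-nginx controller sandbox in the CRI-O listing below is labeled gcp-auth-skip-secret: true. The following is a hedged sketch of creating such a pod with client-go, written as a helper added to the same file as the previous snippet (same imports, same clientset); the pod name and image are hypothetical.

	// createSkipSecretPod creates a pod labeled so the gcp-auth webhook skips
	// mounting GCP credentials into it. cs is the clientset from the sketch
	// above; "no-gcp-creds" and "nginx" are illustrative, not from the log.
	func createSkipSecretPod(cs *kubernetes.Clientset) error {
		pod := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name: "no-gcp-creds",
				Labels: map[string]string{
					"gcp-auth-skip-secret": "true", // opt out of credential mounting
				},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{Name: "app", Image: "nginx"}},
			},
		}
		_, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
		return err
	}

Pods that already exist are not relabeled retroactively; per the log above, recreate them or rerun addons enable with --refresh.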
	
	
	==> CRI-O <==
	Feb 11 02:07:10 addons-046133 crio[660]: time="2025-02-11 02:07:10.102999121Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=824d39bc-7eb6-4398-90ca-02258899ad77 name=/runtime.v1.RuntimeService/Version
	Feb 11 02:07:10 addons-046133 crio[660]: time="2025-02-11 02:07:10.104174963Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=63ec5bca-d54a-4eb7-9cc4-edb0b55d12eb name=/runtime.v1.ImageService/ImageFsInfo
	Feb 11 02:07:10 addons-046133 crio[660]: time="2025-02-11 02:07:10.105308575Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739239630105283422,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595288,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=63ec5bca-d54a-4eb7-9cc4-edb0b55d12eb name=/runtime.v1.ImageService/ImageFsInfo
	Feb 11 02:07:10 addons-046133 crio[660]: time="2025-02-11 02:07:10.105746358Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d47aebfb-a24a-4482-a811-095dbb757aef name=/runtime.v1.RuntimeService/ListContainers
	Feb 11 02:07:10 addons-046133 crio[660]: time="2025-02-11 02:07:10.105811239Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d47aebfb-a24a-4482-a811-095dbb757aef name=/runtime.v1.RuntimeService/ListContainers
	Feb 11 02:07:10 addons-046133 crio[660]: time="2025-02-11 02:07:10.106116546Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b4d4818c5da92b29f29700cb5d6cbc3e3c867ba69061357a314ffcfda6859e90,PodSandboxId:011a095225442cf3a317065c0784bdbe1db0e914c4d79f2d9a657f014c6397fc,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:6666d93f054a3f4315894b76f2023f3da2fcb5ceb5f8d91625cca81623edd2da,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d41a14a4ecff96bdae6253ad2f58d8f258786db438307846081e8d835b984111,State:CONTAINER_RUNNING,CreatedAt:1739239491657126556,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8907db89-70f9-4576-b1e0-7316d1a91e4e,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c32da8f3ba6c893f1439b4f26173d9164f670067cb198237e75c5ded7e4a2401,PodSandboxId:030abf55627602c348e672d4eb3a7dba817652a98cdd09eee1e942b3b3357d9a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1739239458881399020,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9e67be71-2587-40c9-87e6-ff6a660a4097,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97c482a85d181b801c051844ca3a60802823764262391b268346e8d0b7a0be78,PodSandboxId:300e905265757010b218a722e335d1597c9a1f8c2849ec40da7ebecaf20aa196,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1739239449357296216,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-ndz5m,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1044e5ff-d4ed-43b5-a83a-57dc8b1e1a7d,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:c6d9ac18ee2a061b0c125f33763c9fcea0c92d6a559832c8a6771281fb0ad2a9,PodSandboxId:a90a3bffbb0a5a9e17315d6ffce6b2cb28a58f2905bdef55cbbcd3de1fe5c35a,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05
ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1739239430692525767,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xtv7c,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0b690312-e2e3-43c6-8baf-e5dcaff87405,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:016d7ba23ac6e433e20278561f049445cdec8e456dab9238b0a0fe5684c2bbf7,PodSandboxId:9159e2d29d93025a7dbb49db1aa90c4dd8e1abe50cc42a4841009421c2738cb5,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1739239430597262138,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-sv7t9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 38621dfc-d49c-403d-9601-794010e7ceb9,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aac0bc5f1157821b0de771a867cb16ba2dc7fe40e19d509e7f44d6ed6b31bb56,PodSandboxId:b29d59f6f9cced829ea9890dc4dbd12dd4ec7f06f83f6dfbd4c74fe2bf866182,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1739239399007501884,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-ndfgq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7623e9c-a193-449d-9b25-4529b7d52ed5,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ba7f577a1071e8b49c4c15db7d1195f5ca667162b8f6952bfd5ed491cfc5367,PodSandboxId:8259eb7ab4540d8126dfc6595063cac4816477425e5c8ffd146eb76b6c99dd32,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de3
5f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1739239384668981896,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cf3737a-7969-4ddc-9f51-dd9b5d4111a5,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7e57fb9ef4493801affaab700ec503bd4e6f39002e38b8ad507aaac87cc972b,PodSandboxId:f2f68188bdfbe99043381a7017fced538f35a45ac1384065157d03de63d65f41,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1739239374875331102,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 157227da-e1f6-42a9-aa1c-d3ba0a8d1a9d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e909df49e4b28a99ae23ac1f1b0987d4bca072f15daa32c09e5502717c32d573,PodSandboxId:f39ed36ddaa9214b73b6aaf150843511c4d13d793b3a3b0a58d6c65979f49aa6,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1739239372074993459,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-bzgtq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adaf0b21-a407-4841-8fd0-d7e110fd507d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:b91f7189aea56e3261321415be544cf2f98a048a58315ae9b42af963f48d8472,PodSandboxId:f85e101fbedf235dfc85887126654b60f0c460e3479e8c513287fcc262d01392,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1739239369314942686,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-76r2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ef1cc31-3964-4acd-8302-4730917d0e9c,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:293eed0d79a6f3c405b98bdfde36
85efa982b52a2416e9eb6ee74eecf18a208c,PodSandboxId:e3806743fb244febd6950c73594cb9c18d93b7d14d4cc42e52bcec55d179f00b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1739239358658533166,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-046133,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e25008a2fc98ad740b6d14e4c89925df,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18601f8c523f61e5ffbef73199d3337ddabbaf2f31329d0308eb5f35c7ab2c47,PodSandbox
Id:093f567a57fdee1a634b106f02363340115f44ce8b679c70afdf46c7bb0d0b04,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1739239358598864229,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-046133,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1390d7acf439e76b5ed47a25a6cd2606,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b76eb7a90fdcd81fb78f9e24143171fa1db11b0c2b5eaf64e774e40a2f4d126d,PodSandboxId:2bee84dd9ce53c
d73c2e5f31102e1bc85fc543f3480500013b1fa10083d1e2d9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1739239358616752526,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-046133,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aa43f2e11662e39b75b9449a9b841e7,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:635a67389863b8e656bb823200e3e26f804a1f48396ef9070197309b5110063a,PodSandboxId:034d7
4a19206809882fb46b4486e44c78e3acfbf2602379f772ef961fd384030,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1739239358567109754,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-046133,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9380104343ae684903af67269a402038,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d47aebfb-a24a-4482-a811-095dbb757aef name=/runtime.v1.RuntimeServ
ice/ListContainers
	Feb 11 02:07:10 addons-046133 crio[660]: time="2025-02-11 02:07:10.109000915Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=3ea492ca-4904-4621-9bbb-bc05a29e286c name=/runtime.v1.RuntimeService/ListPodSandbox
	Feb 11 02:07:10 addons-046133 crio[660]: time="2025-02-11 02:07:10.109324663Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:081548143b5efca2566fa0e6fb48e14ea46bd0837be957e7ea168f388e21a2d9,Metadata:&PodSandboxMetadata{Name:hello-world-app-7d9564db4-ssp52,Uid:31c8cb8e-ae30-430b-a4ad-ff5007e2019e,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1739239629164529619,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-7d9564db4-ssp52,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 31c8cb8e-ae30-430b-a4ad-ff5007e2019e,pod-template-hash: 7d9564db4,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-02-11T02:07:08.845815626Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:011a095225442cf3a317065c0784bdbe1db0e914c4d79f2d9a657f014c6397fc,Metadata:&PodSandboxMetadata{Name:nginx,Uid:8907db89-70f9-4576-b1e0-7316d1a91e4e,Namespace:default,Attempt:0,},St
ate:SANDBOX_READY,CreatedAt:1739239486575894147,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8907db89-70f9-4576-b1e0-7316d1a91e4e,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-02-11T02:04:46.264382623Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:030abf55627602c348e672d4eb3a7dba817652a98cdd09eee1e942b3b3357d9a,Metadata:&PodSandboxMetadata{Name:busybox,Uid:9e67be71-2587-40c9-87e6-ff6a660a4097,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1739239456453010848,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9e67be71-2587-40c9-87e6-ff6a660a4097,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-02-11T02:04:16.144302415Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:300e905265757010b218a
722e335d1597c9a1f8c2849ec40da7ebecaf20aa196,Metadata:&PodSandboxMetadata{Name:ingress-nginx-controller-56d7c84fd4-ndz5m,Uid:1044e5ff-d4ed-43b5-a83a-57dc8b1e1a7d,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_READY,CreatedAt:1739239441057604666,Labels:map[string]string{app.kubernetes.io/component: controller,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-ndz5m,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1044e5ff-d4ed-43b5-a83a-57dc8b1e1a7d,pod-template-hash: 56d7c84fd4,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-02-11T02:02:56.663816145Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9159e2d29d93025a7dbb49db1aa90c4dd8e1abe50cc42a4841009421c2738cb5,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-create-sv7t9,Uid:38621dfc-d49c-403d-9601-794010e7ceb9,Namespace:ingress-nginx,Attempt:0,},S
tate:SANDBOX_NOTREADY,CreatedAt:1739239377315400190,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.kubernetes.io/controller-uid: 22271712-5f41-4d9f-9c0d-39827977fddc,batch.kubernetes.io/job-name: ingress-nginx-admission-create,controller-uid: 22271712-5f41-4d9f-9c0d-39827977fddc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-create-sv7t9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 38621dfc-d49c-403d-9601-794010e7ceb9,job-name: ingress-nginx-admission-create,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-02-11T02:02:56.767673486Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a90a3bffbb0a5a9e17315d6ffce6b2cb28a58f2905bdef55cbbcd3de1fe5c35a,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-patch-xtv7c,Uid:0b690312-e2e3-43c6-8baf-e5dcaff87405,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,C
reatedAt:1739239377173906758,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.kubernetes.io/controller-uid: 3adf98ef-b4f5-4f79-a24f-d766cf14800f,batch.kubernetes.io/job-name: ingress-nginx-admission-patch,controller-uid: 3adf98ef-b4f5-4f79-a24f-d766cf14800f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-patch-xtv7c,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0b690312-e2e3-43c6-8baf-e5dcaff87405,job-name: ingress-nginx-admission-patch,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-02-11T02:02:56.779475135Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f2f68188bdfbe99043381a7017fced538f35a45ac1384065157d03de63d65f41,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:157227da-e1f6-42a9-aa1c-d3ba0a8d1a9d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1739239374374009382,Labels:map[string]
string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 157227da-e1f6-42a9-aa1c-d3ba0a8d1a9d,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io
/config.seen: 2025-02-11T02:02:54.062344251Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8259eb7ab4540d8126dfc6595063cac4816477425e5c8ffd146eb76b6c99dd32,Metadata:&PodSandboxMetadata{Name:kube-ingress-dns-minikube,Uid:2cf3737a-7969-4ddc-9f51-dd9b5d4111a5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1739239373602845741,Labels:map[string]string{app: minikube-ingress-dns,app.kubernetes.io/part-of: kube-system,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cf3737a-7969-4ddc-9f51-dd9b5d4111a5,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"minikube-ingress-dns\",\"app.kubernetes.io/part-of\":\"kube-system\"},\"name\":\"kube-ingress-dns-minikube\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"DNS_PORT\",\"value\":
\"53\"},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}}],\"image\":\"gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"minikube-ingress-dns\",\"ports\":[{\"containerPort\":53,\"protocol\":\"UDP\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"minikube-ingress-dns\"}}\n,kubernetes.io/config.seen: 2025-02-11T02:02:52.974926271Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b29d59f6f9cced829ea9890dc4dbd12dd4ec7f06f83f6dfbd4c74fe2bf866182,Metadata:&PodSandboxMetadata{Name:amd-gpu-device-plugin-ndfgq,Uid:f7623e9c-a193-449d-9b25-4529b7d52ed5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1739239371918456820,Labels:map[string]string{controller-revision-hash: 578b4c597,io.kubernetes.container.name: POD,io.kubernetes.pod.name: amd-gpu-device-plugin-ndfgq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7623e9
c-a193-449d-9b25-4529b7d52ed5,k8s-app: amd-gpu-device-plugin,name: amd-gpu-device-plugin,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-02-11T02:02:51.607324450Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f39ed36ddaa9214b73b6aaf150843511c4d13d793b3a3b0a58d6c65979f49aa6,Metadata:&PodSandboxMetadata{Name:coredns-668d6bf9bc-bzgtq,Uid:adaf0b21-a407-4841-8fd0-d7e110fd507d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1739239369295042634,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-bzgtq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adaf0b21-a407-4841-8fd0-d7e110fd507d,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-02-11T02:02:48.977321506Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f85e101fbedf235dfc85887126654b60f0c460e3479e8c513287fcc262d01392,Metadata:&PodSandboxM
etadata{Name:kube-proxy-76r2h,Uid:5ef1cc31-3964-4acd-8302-4730917d0e9c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1739239369164264942,Labels:map[string]string{controller-revision-hash: 566d7b9f85,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-76r2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ef1cc31-3964-4acd-8302-4730917d0e9c,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-02-11T02:02:48.855031615Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e3806743fb244febd6950c73594cb9c18d93b7d14d4cc42e52bcec55d179f00b,Metadata:&PodSandboxMetadata{Name:etcd-addons-046133,Uid:e25008a2fc98ad740b6d14e4c89925df,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1739239358448802894,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-046133,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2500
8a2fc98ad740b6d14e4c89925df,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.211:2379,kubernetes.io/config.hash: e25008a2fc98ad740b6d14e4c89925df,kubernetes.io/config.seen: 2025-02-11T02:02:37.955882159Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2bee84dd9ce53cd73c2e5f31102e1bc85fc543f3480500013b1fa10083d1e2d9,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-046133,Uid:0aa43f2e11662e39b75b9449a9b841e7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1739239358427597604,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-046133,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aa43f2e11662e39b75b9449a9b841e7,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 0aa43f2e11662e39b75b9449a9b841e7,kubernetes.io/config.seen: 2025-02-11T02:02:37.955867206
Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:093f567a57fdee1a634b106f02363340115f44ce8b679c70afdf46c7bb0d0b04,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-046133,Uid:1390d7acf439e76b5ed47a25a6cd2606,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1739239358413625857,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-046133,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1390d7acf439e76b5ed47a25a6cd2606,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 1390d7acf439e76b5ed47a25a6cd2606,kubernetes.io/config.seen: 2025-02-11T02:02:37.955880826Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:034d74a19206809882fb46b4486e44c78e3acfbf2602379f772ef961fd384030,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-046133,Uid:9380104343ae684903af67269a402038,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1
739239358406346355,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-046133,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9380104343ae684903af67269a402038,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.211:8443,kubernetes.io/config.hash: 9380104343ae684903af67269a402038,kubernetes.io/config.seen: 2025-02-11T02:02:37.955862937Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=3ea492ca-4904-4621-9bbb-bc05a29e286c name=/runtime.v1.RuntimeService/ListPodSandbox
	Feb 11 02:07:10 addons-046133 crio[660]: time="2025-02-11 02:07:10.110105482Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=76a7dc26-a86d-403e-890e-fa354262dc44 name=/runtime.v1.RuntimeService/ListContainers
	Feb 11 02:07:10 addons-046133 crio[660]: time="2025-02-11 02:07:10.110170783Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=76a7dc26-a86d-403e-890e-fa354262dc44 name=/runtime.v1.RuntimeService/ListContainers
	Feb 11 02:07:10 addons-046133 crio[660]: time="2025-02-11 02:07:10.110468757Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b4d4818c5da92b29f29700cb5d6cbc3e3c867ba69061357a314ffcfda6859e90,PodSandboxId:011a095225442cf3a317065c0784bdbe1db0e914c4d79f2d9a657f014c6397fc,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:6666d93f054a3f4315894b76f2023f3da2fcb5ceb5f8d91625cca81623edd2da,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d41a14a4ecff96bdae6253ad2f58d8f258786db438307846081e8d835b984111,State:CONTAINER_RUNNING,CreatedAt:1739239491657126556,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8907db89-70f9-4576-b1e0-7316d1a91e4e,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c32da8f3ba6c893f1439b4f26173d9164f670067cb198237e75c5ded7e4a2401,PodSandboxId:030abf55627602c348e672d4eb3a7dba817652a98cdd09eee1e942b3b3357d9a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1739239458881399020,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9e67be71-2587-40c9-87e6-ff6a660a4097,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97c482a85d181b801c051844ca3a60802823764262391b268346e8d0b7a0be78,PodSandboxId:300e905265757010b218a722e335d1597c9a1f8c2849ec40da7ebecaf20aa196,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1739239449357296216,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-ndz5m,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1044e5ff-d4ed-43b5-a83a-57dc8b1e1a7d,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:c6d9ac18ee2a061b0c125f33763c9fcea0c92d6a559832c8a6771281fb0ad2a9,PodSandboxId:a90a3bffbb0a5a9e17315d6ffce6b2cb28a58f2905bdef55cbbcd3de1fe5c35a,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05
ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1739239430692525767,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xtv7c,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0b690312-e2e3-43c6-8baf-e5dcaff87405,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:016d7ba23ac6e433e20278561f049445cdec8e456dab9238b0a0fe5684c2bbf7,PodSandboxId:9159e2d29d93025a7dbb49db1aa90c4dd8e1abe50cc42a4841009421c2738cb5,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1739239430597262138,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-sv7t9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 38621dfc-d49c-403d-9601-794010e7ceb9,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aac0bc5f1157821b0de771a867cb16ba2dc7fe40e19d509e7f44d6ed6b31bb56,PodSandboxId:b29d59f6f9cced829ea9890dc4dbd12dd4ec7f06f83f6dfbd4c74fe2bf866182,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1739239399007501884,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-ndfgq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7623e9c-a193-449d-9b25-4529b7d52ed5,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ba7f577a1071e8b49c4c15db7d1195f5ca667162b8f6952bfd5ed491cfc5367,PodSandboxId:8259eb7ab4540d8126dfc6595063cac4816477425e5c8ffd146eb76b6c99dd32,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de3
5f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1739239384668981896,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cf3737a-7969-4ddc-9f51-dd9b5d4111a5,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7e57fb9ef4493801affaab700ec503bd4e6f39002e38b8ad507aaac87cc972b,PodSandboxId:f2f68188bdfbe99043381a7017fced538f35a45ac1384065157d03de63d65f41,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1739239374875331102,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 157227da-e1f6-42a9-aa1c-d3ba0a8d1a9d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e909df49e4b28a99ae23ac1f1b0987d4bca072f15daa32c09e5502717c32d573,PodSandboxId:f39ed36ddaa9214b73b6aaf150843511c4d13d793b3a3b0a58d6c65979f49aa6,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1739239372074993459,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-bzgtq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adaf0b21-a407-4841-8fd0-d7e110fd507d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:b91f7189aea56e3261321415be544cf2f98a048a58315ae9b42af963f48d8472,PodSandboxId:f85e101fbedf235dfc85887126654b60f0c460e3479e8c513287fcc262d01392,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1739239369314942686,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-76r2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ef1cc31-3964-4acd-8302-4730917d0e9c,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:293eed0d79a6f3c405b98bdfde36
85efa982b52a2416e9eb6ee74eecf18a208c,PodSandboxId:e3806743fb244febd6950c73594cb9c18d93b7d14d4cc42e52bcec55d179f00b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1739239358658533166,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-046133,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e25008a2fc98ad740b6d14e4c89925df,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18601f8c523f61e5ffbef73199d3337ddabbaf2f31329d0308eb5f35c7ab2c47,PodSandbox
Id:093f567a57fdee1a634b106f02363340115f44ce8b679c70afdf46c7bb0d0b04,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1739239358598864229,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-046133,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1390d7acf439e76b5ed47a25a6cd2606,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b76eb7a90fdcd81fb78f9e24143171fa1db11b0c2b5eaf64e774e40a2f4d126d,PodSandboxId:2bee84dd9ce53c
d73c2e5f31102e1bc85fc543f3480500013b1fa10083d1e2d9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1739239358616752526,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-046133,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aa43f2e11662e39b75b9449a9b841e7,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:635a67389863b8e656bb823200e3e26f804a1f48396ef9070197309b5110063a,PodSandboxId:034d7
4a19206809882fb46b4486e44c78e3acfbf2602379f772ef961fd384030,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1739239358567109754,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-046133,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9380104343ae684903af67269a402038,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=76a7dc26-a86d-403e-890e-fa354262dc44 name=/runtime.v1.RuntimeServ
ice/ListContainers
	Feb 11 02:07:10 addons-046133 crio[660]: time="2025-02-11 02:07:10.111262683Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:nil,LabelSelector:map[string]string{io.kubernetes.pod.uid: 31c8cb8e-ae30-430b-a4ad-ff5007e2019e,},},}" file="otel-collector/interceptors.go:62" id=be8ba365-b9fc-4636-9ba8-a8ac1d966502 name=/runtime.v1.RuntimeService/ListPodSandbox
	Feb 11 02:07:10 addons-046133 crio[660]: time="2025-02-11 02:07:10.111376735Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:081548143b5efca2566fa0e6fb48e14ea46bd0837be957e7ea168f388e21a2d9,Metadata:&PodSandboxMetadata{Name:hello-world-app-7d9564db4-ssp52,Uid:31c8cb8e-ae30-430b-a4ad-ff5007e2019e,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1739239629164529619,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-7d9564db4-ssp52,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 31c8cb8e-ae30-430b-a4ad-ff5007e2019e,pod-template-hash: 7d9564db4,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-02-11T02:07:08.845815626Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=be8ba365-b9fc-4636-9ba8-a8ac1d966502 name=/runtime.v1.RuntimeService/ListPodSandbox
	Feb 11 02:07:10 addons-046133 crio[660]: time="2025-02-11 02:07:10.111823931Z" level=debug msg="Request: &PodSandboxStatusRequest{PodSandboxId:081548143b5efca2566fa0e6fb48e14ea46bd0837be957e7ea168f388e21a2d9,Verbose:false,}" file="otel-collector/interceptors.go:62" id=3a7bcf96-7c02-4717-bcc0-bb952547310c name=/runtime.v1.RuntimeService/PodSandboxStatus
	Feb 11 02:07:10 addons-046133 crio[660]: time="2025-02-11 02:07:10.111913612Z" level=debug msg="Response: &PodSandboxStatusResponse{Status:&PodSandboxStatus{Id:081548143b5efca2566fa0e6fb48e14ea46bd0837be957e7ea168f388e21a2d9,Metadata:&PodSandboxMetadata{Name:hello-world-app-7d9564db4-ssp52,Uid:31c8cb8e-ae30-430b-a4ad-ff5007e2019e,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1739239629164529619,Network:&PodSandboxNetworkStatus{Ip:10.244.0.33,AdditionalIps:[]*PodIP{},},Linux:&LinuxPodSandboxStatus{Namespaces:&Namespace{Options:&NamespaceOption{Network:POD,Pid:CONTAINER,Ipc:POD,TargetId:,UsernsOptions:nil,},},},Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-7d9564db4-ssp52,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 31c8cb8e-ae30-430b-a4ad-ff5007e2019e,pod-template-hash: 7d9564db4,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-02-11T02:07:08.845815626Z,kubernetes.io/config.source: api
,},RuntimeHandler:,},Info:map[string]string{},ContainersStatuses:[]*ContainerStatus{},Timestamp:0,}" file="otel-collector/interceptors.go:74" id=3a7bcf96-7c02-4717-bcc0-bb952547310c name=/runtime.v1.RuntimeService/PodSandboxStatus
	Feb 11 02:07:10 addons-046133 crio[660]: time="2025-02-11 02:07:10.112195529Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{io.kubernetes.pod.uid: 31c8cb8e-ae30-430b-a4ad-ff5007e2019e,},},}" file="otel-collector/interceptors.go:62" id=2747c8dc-8afc-40c4-aba7-a4dd7350c6d2 name=/runtime.v1.RuntimeService/ListContainers
	Feb 11 02:07:10 addons-046133 crio[660]: time="2025-02-11 02:07:10.112248347Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2747c8dc-8afc-40c4-aba7-a4dd7350c6d2 name=/runtime.v1.RuntimeService/ListContainers
	Feb 11 02:07:10 addons-046133 crio[660]: time="2025-02-11 02:07:10.112297268Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=2747c8dc-8afc-40c4-aba7-a4dd7350c6d2 name=/runtime.v1.RuntimeService/ListContainers
	Feb 11 02:07:10 addons-046133 crio[660]: time="2025-02-11 02:07:10.143202826Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9ebba389-2cfa-4405-bb63-3a7283107536 name=/runtime.v1.RuntimeService/Version
	Feb 11 02:07:10 addons-046133 crio[660]: time="2025-02-11 02:07:10.143284344Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9ebba389-2cfa-4405-bb63-3a7283107536 name=/runtime.v1.RuntimeService/Version
	Feb 11 02:07:10 addons-046133 crio[660]: time="2025-02-11 02:07:10.144138391Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2ea509d4-bd82-4efc-8dcb-c67d1bbaa48c name=/runtime.v1.ImageService/ImageFsInfo
	Feb 11 02:07:10 addons-046133 crio[660]: time="2025-02-11 02:07:10.145376111Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739239630145350174,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595288,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2ea509d4-bd82-4efc-8dcb-c67d1bbaa48c name=/runtime.v1.ImageService/ImageFsInfo
	Feb 11 02:07:10 addons-046133 crio[660]: time="2025-02-11 02:07:10.145835382Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eb0f7070-fbd5-48ca-9772-de6ca4d99485 name=/runtime.v1.RuntimeService/ListContainers
	Feb 11 02:07:10 addons-046133 crio[660]: time="2025-02-11 02:07:10.145887235Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eb0f7070-fbd5-48ca-9772-de6ca4d99485 name=/runtime.v1.RuntimeService/ListContainers
	Feb 11 02:07:10 addons-046133 crio[660]: time="2025-02-11 02:07:10.146223072Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b4d4818c5da92b29f29700cb5d6cbc3e3c867ba69061357a314ffcfda6859e90,PodSandboxId:011a095225442cf3a317065c0784bdbe1db0e914c4d79f2d9a657f014c6397fc,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:6666d93f054a3f4315894b76f2023f3da2fcb5ceb5f8d91625cca81623edd2da,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d41a14a4ecff96bdae6253ad2f58d8f258786db438307846081e8d835b984111,State:CONTAINER_RUNNING,CreatedAt:1739239491657126556,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8907db89-70f9-4576-b1e0-7316d1a91e4e,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c32da8f3ba6c893f1439b4f26173d9164f670067cb198237e75c5ded7e4a2401,PodSandboxId:030abf55627602c348e672d4eb3a7dba817652a98cdd09eee1e942b3b3357d9a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1739239458881399020,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9e67be71-2587-40c9-87e6-ff6a660a4097,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97c482a85d181b801c051844ca3a60802823764262391b268346e8d0b7a0be78,PodSandboxId:300e905265757010b218a722e335d1597c9a1f8c2849ec40da7ebecaf20aa196,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1739239449357296216,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-56d7c84fd4-ndz5m,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 1044e5ff-d4ed-43b5-a83a-57dc8b1e1a7d,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:c6d9ac18ee2a061b0c125f33763c9fcea0c92d6a559832c8a6771281fb0ad2a9,PodSandboxId:a90a3bffbb0a5a9e17315d6ffce6b2cb28a58f2905bdef55cbbcd3de1fe5c35a,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05
ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1739239430692525767,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xtv7c,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0b690312-e2e3-43c6-8baf-e5dcaff87405,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:016d7ba23ac6e433e20278561f049445cdec8e456dab9238b0a0fe5684c2bbf7,PodSandboxId:9159e2d29d93025a7dbb49db1aa90c4dd8e1abe50cc42a4841009421c2738cb5,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1739239430597262138,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-sv7t9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 38621dfc-d49c-403d-9601-794010e7ceb9,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aac0bc5f1157821b0de771a867cb16ba2dc7fe40e19d509e7f44d6ed6b31bb56,PodSandboxId:b29d59f6f9cced829ea9890dc4dbd12dd4ec7f06f83f6dfbd4c74fe2bf866182,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1739239399007501884,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-ndfgq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7623e9c-a193-449d-9b25-4529b7d52ed5,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ba7f577a1071e8b49c4c15db7d1195f5ca667162b8f6952bfd5ed491cfc5367,PodSandboxId:8259eb7ab4540d8126dfc6595063cac4816477425e5c8ffd146eb76b6c99dd32,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de3
5f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1739239384668981896,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2cf3737a-7969-4ddc-9f51-dd9b5d4111a5,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7e57fb9ef4493801affaab700ec503bd4e6f39002e38b8ad507aaac87cc972b,PodSandboxId:f2f68188bdfbe99043381a7017fced538f35a45ac1384065157d03de63d65f41,Metadata:&ContainerMetadata{Name:st
orage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1739239374875331102,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 157227da-e1f6-42a9-aa1c-d3ba0a8d1a9d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e909df49e4b28a99ae23ac1f1b0987d4bca072f15daa32c09e5502717c32d573,PodSandboxId:f39ed36ddaa9214b73b6aaf150843511c4d13d793b3a3b0a58d6c65979f49aa6,Metadata:&ContainerMetadata{Name:coredns,Attemp
t:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1739239372074993459,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-bzgtq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adaf0b21-a407-4841-8fd0-d7e110fd507d,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:b91f7189aea56e3261321415be544cf2f98a048a58315ae9b42af963f48d8472,PodSandboxId:f85e101fbedf235dfc85887126654b60f0c460e3479e8c513287fcc262d01392,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a,State:CONTAINER_RUNNING,CreatedAt:1739239369314942686,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-76r2h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ef1cc31-3964-4acd-8302-4730917d0e9c,},Annotations:map[string]string{io.kubernetes.container.hash: 9ed8b293,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:293eed0d79a6f3c405b98bdfde36
85efa982b52a2416e9eb6ee74eecf18a208c,PodSandboxId:e3806743fb244febd6950c73594cb9c18d93b7d14d4cc42e52bcec55d179f00b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1739239358658533166,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-046133,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e25008a2fc98ad740b6d14e4c89925df,},Annotations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18601f8c523f61e5ffbef73199d3337ddabbaf2f31329d0308eb5f35c7ab2c47,PodSandbox
Id:093f567a57fdee1a634b106f02363340115f44ce8b679c70afdf46c7bb0d0b04,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1,State:CONTAINER_RUNNING,CreatedAt:1739239358598864229,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-046133,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1390d7acf439e76b5ed47a25a6cd2606,},Annotations:map[string]string{io.kubernetes.container.hash: 29f857d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b76eb7a90fdcd81fb78f9e24143171fa1db11b0c2b5eaf64e774e40a2f4d126d,PodSandboxId:2bee84dd9ce53c
d73c2e5f31102e1bc85fc543f3480500013b1fa10083d1e2d9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35,State:CONTAINER_RUNNING,CreatedAt:1739239358616752526,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-046133,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0aa43f2e11662e39b75b9449a9b841e7,},Annotations:map[string]string{io.kubernetes.container.hash: 16baf0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:635a67389863b8e656bb823200e3e26f804a1f48396ef9070197309b5110063a,PodSandboxId:034d7
4a19206809882fb46b4486e44c78e3acfbf2602379f772ef961fd384030,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a,State:CONTAINER_RUNNING,CreatedAt:1739239358567109754,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-046133,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9380104343ae684903af67269a402038,},Annotations:map[string]string{io.kubernetes.container.hash: e764ba09,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=eb0f7070-fbd5-48ca-9772-de6ca4d99485 name=/runtime.v1.RuntimeServ
ice/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b4d4818c5da92       docker.io/library/nginx@sha256:6666d93f054a3f4315894b76f2023f3da2fcb5ceb5f8d91625cca81623edd2da                              2 minutes ago       Running             nginx                     0                   011a095225442       nginx
	c32da8f3ba6c8       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          2 minutes ago       Running             busybox                   0                   030abf5562760       busybox
	97c482a85d181       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             3 minutes ago       Running             controller                0                   300e905265757       ingress-nginx-controller-56d7c84fd4-ndz5m
	c6d9ac18ee2a0       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   3 minutes ago       Exited              patch                     0                   a90a3bffbb0a5       ingress-nginx-admission-patch-xtv7c
	016d7ba23ac6e       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   3 minutes ago       Exited              create                    0                   9159e2d29d930       ingress-nginx-admission-create-sv7t9
	aac0bc5f11578       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     3 minutes ago       Running             amd-gpu-device-plugin     0                   b29d59f6f9cce       amd-gpu-device-plugin-ndfgq
	7ba7f577a1071       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             4 minutes ago       Running             minikube-ingress-dns      0                   8259eb7ab4540       kube-ingress-dns-minikube
	a7e57fb9ef449       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   f2f68188bdfbe       storage-provisioner
	e909df49e4b28       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             4 minutes ago       Running             coredns                   0                   f39ed36ddaa92       coredns-668d6bf9bc-bzgtq
	b91f7189aea56       e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a                                                             4 minutes ago       Running             kube-proxy                0                   f85e101fbedf2       kube-proxy-76r2h
	293eed0d79a6f       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc                                                             4 minutes ago       Running             etcd                      0                   e3806743fb244       etcd-addons-046133
	b76eb7a90fdcd       019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35                                                             4 minutes ago       Running             kube-controller-manager   0                   2bee84dd9ce53       kube-controller-manager-addons-046133
	18601f8c523f6       2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1                                                             4 minutes ago       Running             kube-scheduler            0                   093f567a57fde       kube-scheduler-addons-046133
	635a67389863b       95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a                                                             4 minutes ago       Running             kube-apiserver            0                   034d74a192068       kube-apiserver-addons-046133
	
	
	==> coredns [e909df49e4b28a99ae23ac1f1b0987d4bca072f15daa32c09e5502717c32d573] <==
	[INFO] 10.244.0.8:35862 - 12525 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000685582s
	[INFO] 10.244.0.8:35862 - 45064 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000107544s
	[INFO] 10.244.0.8:35862 - 8426 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000072929s
	[INFO] 10.244.0.8:35862 - 32132 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000076846s
	[INFO] 10.244.0.8:35862 - 37979 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.00005706s
	[INFO] 10.244.0.8:35862 - 62417 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000098063s
	[INFO] 10.244.0.8:35862 - 5828 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000068353s
	[INFO] 10.244.0.8:43080 - 16990 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000123645s
	[INFO] 10.244.0.8:43080 - 16681 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000060793s
	[INFO] 10.244.0.8:41498 - 18694 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000094793s
	[INFO] 10.244.0.8:41498 - 18453 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0000558s
	[INFO] 10.244.0.8:32793 - 41673 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000135264s
	[INFO] 10.244.0.8:32793 - 41886 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000445516s
	[INFO] 10.244.0.8:47703 - 17996 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000099359s
	[INFO] 10.244.0.8:47703 - 17776 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000200319s
	[INFO] 10.244.0.23:46544 - 59229 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000322799s
	[INFO] 10.244.0.23:47461 - 33610 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000127642s
	[INFO] 10.244.0.23:49629 - 65197 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000119546s
	[INFO] 10.244.0.23:58653 - 43975 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000069018s
	[INFO] 10.244.0.23:54922 - 50782 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000069627s
	[INFO] 10.244.0.23:50422 - 2732 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00009859s
	[INFO] 10.244.0.23:44309 - 14753 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 382 0.001098107s
	[INFO] 10.244.0.23:50138 - 57514 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000994005s
	[INFO] 10.244.0.25:41105 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000350325s
	[INFO] 10.244.0.25:44090 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000119921s
	
	
	==> describe nodes <==
	Name:               addons-046133
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-046133
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8e91f70b9b442caa4bec80b031add390ac34d321
	                    minikube.k8s.io/name=addons-046133
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_02_11T02_02_44_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-046133
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 11 Feb 2025 02:02:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-046133
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 11 Feb 2025 02:07:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 11 Feb 2025 02:05:17 +0000   Tue, 11 Feb 2025 02:02:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 11 Feb 2025 02:05:17 +0000   Tue, 11 Feb 2025 02:02:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 11 Feb 2025 02:05:17 +0000   Tue, 11 Feb 2025 02:02:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 11 Feb 2025 02:05:17 +0000   Tue, 11 Feb 2025 02:02:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.211
	  Hostname:    addons-046133
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 4084c3b6646c400d904bf2b2bb490d51
	  System UUID:                4084c3b6-646c-400d-904b-f2b2bb490d51
	  Boot ID:                    b45b9cd4-eb8e-4e1c-9d65-4abdba350f67
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m54s
	  default                     hello-world-app-7d9564db4-ssp52              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  ingress-nginx               ingress-nginx-controller-56d7c84fd4-ndz5m    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m14s
	  kube-system                 amd-gpu-device-plugin-ndfgq                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 coredns-668d6bf9bc-bzgtq                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m22s
	  kube-system                 etcd-addons-046133                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m27s
	  kube-system                 kube-apiserver-addons-046133                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 kube-controller-manager-addons-046133        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-proxy-76r2h                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 kube-scheduler-addons-046133                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m20s  kube-proxy       
	  Normal  Starting                 4m27s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m27s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m27s  kubelet          Node addons-046133 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m27s  kubelet          Node addons-046133 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m27s  kubelet          Node addons-046133 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m26s  kubelet          Node addons-046133 status is now: NodeReady
	  Normal  RegisteredNode           4m23s  node-controller  Node addons-046133 event: Registered Node addons-046133 in Controller
	
	
	==> dmesg <==
	[  +5.575543] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.294349] systemd-fstab-generator[1421]: Ignoring "noauto" option for root device
	[  +4.709841] kauditd_printk_skb: 114 callbacks suppressed
	[  +5.040217] kauditd_printk_skb: 131 callbacks suppressed
	[Feb11 02:03] kauditd_printk_skb: 79 callbacks suppressed
	[ +12.045139] kauditd_printk_skb: 5 callbacks suppressed
	[ +16.279911] kauditd_printk_skb: 6 callbacks suppressed
	[  +7.434926] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.219567] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.080617] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.046680] kauditd_printk_skb: 36 callbacks suppressed
	[Feb11 02:04] kauditd_printk_skb: 28 callbacks suppressed
	[  +8.293452] kauditd_printk_skb: 25 callbacks suppressed
	[  +7.086718] kauditd_printk_skb: 6 callbacks suppressed
	[ +12.130280] kauditd_printk_skb: 11 callbacks suppressed
	[ +11.849318] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.006757] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.315036] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.703778] kauditd_printk_skb: 43 callbacks suppressed
	[Feb11 02:05] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.177446] kauditd_printk_skb: 46 callbacks suppressed
	[ +10.502715] kauditd_printk_skb: 15 callbacks suppressed
	[  +6.331223] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.874855] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.522383] kauditd_printk_skb: 55 callbacks suppressed
	
	
	==> etcd [293eed0d79a6f3c405b98bdfde3685efa982b52a2416e9eb6ee74eecf18a208c] <==
	{"level":"info","ts":"2025-02-11T02:03:31.959222Z","caller":"traceutil/trace.go:171","msg":"trace[2075717639] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:916; }","duration":"134.775338ms","start":"2025-02-11T02:03:31.824438Z","end":"2025-02-11T02:03:31.959214Z","steps":["trace[2075717639] 'range keys from in-memory index tree'  (duration: 134.687973ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-11T02:03:32.705999Z","caller":"traceutil/trace.go:171","msg":"trace[278507103] transaction","detail":"{read_only:false; response_revision:917; number_of_response:1; }","duration":"165.894399ms","start":"2025-02-11T02:03:32.540056Z","end":"2025-02-11T02:03:32.705950Z","steps":["trace[278507103] 'process raft request'  (duration: 165.791946ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-11T02:03:32.706792Z","caller":"traceutil/trace.go:171","msg":"trace[1731372895] transaction","detail":"{read_only:false; response_revision:918; number_of_response:1; }","duration":"150.374148ms","start":"2025-02-11T02:03:32.556406Z","end":"2025-02-11T02:03:32.706780Z","steps":["trace[1731372895] 'process raft request'  (duration: 150.166211ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-11T02:03:35.304942Z","caller":"traceutil/trace.go:171","msg":"trace[1052243779] linearizableReadLoop","detail":"{readStateIndex:953; appliedIndex:952; }","duration":"193.445708ms","start":"2025-02-11T02:03:35.111479Z","end":"2025-02-11T02:03:35.304925Z","steps":["trace[1052243779] 'read index received'  (duration: 193.261537ms)","trace[1052243779] 'applied index is now lower than readState.Index'  (duration: 183.769µs)"],"step_count":2}
	{"level":"warn","ts":"2025-02-11T02:03:35.305034Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"193.536454ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-11T02:03:35.305052Z","caller":"traceutil/trace.go:171","msg":"trace[1470602728] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:927; }","duration":"193.590422ms","start":"2025-02-11T02:03:35.111457Z","end":"2025-02-11T02:03:35.305047Z","steps":["trace[1470602728] 'agreement among raft nodes before linearized reading'  (duration: 193.539597ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-11T02:03:35.305231Z","caller":"traceutil/trace.go:171","msg":"trace[1559070190] transaction","detail":"{read_only:false; response_revision:927; number_of_response:1; }","duration":"309.267011ms","start":"2025-02-11T02:03:34.995955Z","end":"2025-02-11T02:03:35.305222Z","steps":["trace[1559070190] 'process raft request'  (duration: 308.827742ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-11T02:03:35.305319Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-11T02:03:34.995939Z","time spent":"309.323699ms","remote":"127.0.0.1:58640","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":541,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-046133\" mod_revision:889 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-046133\" value_size:487 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-046133\" > >"}
	{"level":"info","ts":"2025-02-11T02:03:48.595524Z","caller":"traceutil/trace.go:171","msg":"trace[1483853468] linearizableReadLoop","detail":"{readStateIndex:1001; appliedIndex:1000; }","duration":"408.066586ms","start":"2025-02-11T02:03:48.187377Z","end":"2025-02-11T02:03:48.595443Z","steps":["trace[1483853468] 'read index received'  (duration: 405.761189ms)","trace[1483853468] 'applied index is now lower than readState.Index'  (duration: 2.304293ms)"],"step_count":2}
	{"level":"warn","ts":"2025-02-11T02:03:48.595820Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"408.389684ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitions0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2025-02-11T02:03:48.596052Z","caller":"traceutil/trace.go:171","msg":"trace[446605639] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions/; range_end:/registry/apiextensions.k8s.io/customresourcedefinitions0; response_count:0; response_revision:971; }","duration":"408.662896ms","start":"2025-02-11T02:03:48.187313Z","end":"2025-02-11T02:03:48.595976Z","steps":["trace[446605639] 'agreement among raft nodes before linearized reading'  (duration: 408.31695ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-11T02:03:48.596127Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-11T02:03:48.187299Z","time spent":"408.800274ms","remote":"127.0.0.1:58434","response type":"/etcdserverpb.KV/Range","request count":0,"request size":120,"response count":4,"response size":31,"request content":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitions0\" count_only:true "}
	{"level":"warn","ts":"2025-02-11T02:03:48.596132Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"273.431216ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-02-11T02:03:48.596463Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.029541ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-11T02:03:48.596512Z","caller":"traceutil/trace.go:171","msg":"trace[1694202237] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:971; }","duration":"131.097902ms","start":"2025-02-11T02:03:48.465403Z","end":"2025-02-11T02:03:48.596501Z","steps":["trace[1694202237] 'agreement among raft nodes before linearized reading'  (duration: 131.028599ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-11T02:03:48.596972Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"194.482267ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-11T02:03:48.597019Z","caller":"traceutil/trace.go:171","msg":"trace[2031740987] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:971; }","duration":"194.55943ms","start":"2025-02-11T02:03:48.402450Z","end":"2025-02-11T02:03:48.597010Z","steps":["trace[2031740987] 'agreement among raft nodes before linearized reading'  (duration: 194.474764ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-11T02:03:48.597211Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"245.130821ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-11T02:03:48.597253Z","caller":"traceutil/trace.go:171","msg":"trace[1424068132] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:971; }","duration":"245.173509ms","start":"2025-02-11T02:03:48.352072Z","end":"2025-02-11T02:03:48.597245Z","steps":["trace[1424068132] 'agreement among raft nodes before linearized reading'  (duration: 245.119325ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-11T02:03:48.596297Z","caller":"traceutil/trace.go:171","msg":"trace[1880543468] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:971; }","duration":"273.840261ms","start":"2025-02-11T02:03:48.322448Z","end":"2025-02-11T02:03:48.596288Z","steps":["trace[1880543468] 'agreement among raft nodes before linearized reading'  (duration: 273.438061ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-11T02:03:53.979833Z","caller":"traceutil/trace.go:171","msg":"trace[409124842] transaction","detail":"{read_only:false; response_revision:1016; number_of_response:1; }","duration":"103.456537ms","start":"2025-02-11T02:03:53.876363Z","end":"2025-02-11T02:03:53.979819Z","steps":["trace[409124842] 'process raft request'  (duration: 103.250769ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-11T02:04:01.041094Z","caller":"traceutil/trace.go:171","msg":"trace[909299998] transaction","detail":"{read_only:false; response_revision:1072; number_of_response:1; }","duration":"187.429296ms","start":"2025-02-11T02:04:00.853651Z","end":"2025-02-11T02:04:01.041081Z","steps":["trace[909299998] 'process raft request'  (duration: 187.133673ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-11T02:04:57.970162Z","caller":"traceutil/trace.go:171","msg":"trace[921827680] transaction","detail":"{read_only:false; response_revision:1439; number_of_response:1; }","duration":"183.310791ms","start":"2025-02-11T02:04:57.786821Z","end":"2025-02-11T02:04:57.970132Z","steps":["trace[921827680] 'process raft request'  (duration: 145.195065ms)","trace[921827680] 'compare'  (duration: 37.805448ms)"],"step_count":2}
	{"level":"info","ts":"2025-02-11T02:05:09.536790Z","caller":"traceutil/trace.go:171","msg":"trace[1620955023] transaction","detail":"{read_only:false; response_revision:1553; number_of_response:1; }","duration":"152.007277ms","start":"2025-02-11T02:05:09.384769Z","end":"2025-02-11T02:05:09.536776Z","steps":["trace[1620955023] 'process raft request'  (duration: 151.665089ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-11T02:05:31.921933Z","caller":"traceutil/trace.go:171","msg":"trace[777240827] transaction","detail":"{read_only:false; response_revision:1668; number_of_response:1; }","duration":"289.331884ms","start":"2025-02-11T02:05:31.632582Z","end":"2025-02-11T02:05:31.921914Z","steps":["trace[777240827] 'process raft request'  (duration: 289.256542ms)"],"step_count":1}
	
	
	==> kernel <==
	 02:07:10 up 5 min,  0 users,  load average: 0.50, 0.90, 0.46
	Linux addons-046133 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [635a67389863b8e656bb823200e3e26f804a1f48396ef9070197309b5110063a] <==
	I0211 02:03:27.555160       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0211 02:04:25.317618       1 conn.go:339] Error on socket receive: read tcp 192.168.39.211:8443->192.168.39.1:44922: use of closed network connection
	E0211 02:04:25.485473       1 conn.go:339] Error on socket receive: read tcp 192.168.39.211:8443->192.168.39.1:44954: use of closed network connection
	I0211 02:04:46.130941       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0211 02:04:46.309399       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.203.213"}
	I0211 02:04:46.429767       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0211 02:04:47.469264       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0211 02:05:05.441254       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.225.25"}
	I0211 02:05:07.187884       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0211 02:05:22.108507       1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0211 02:05:28.511491       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0211 02:05:33.492136       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0211 02:05:33.492238       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0211 02:05:33.517367       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0211 02:05:33.517423       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0211 02:05:33.526330       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0211 02:05:33.526384       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0211 02:05:33.555950       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0211 02:05:33.556004       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0211 02:05:33.588273       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0211 02:05:33.588591       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0211 02:05:34.526644       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0211 02:05:34.587952       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0211 02:05:34.677600       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0211 02:07:09.037134       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.110.55.169"}
	
	
	==> kube-controller-manager [b76eb7a90fdcd81fb78f9e24143171fa1db11b0c2b5eaf64e774e40a2f4d126d] <==
	E0211 02:06:12.794926       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0211 02:06:13.360541       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0211 02:06:13.361418       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
	W0211 02:06:13.362381       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0211 02:06:13.362422       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0211 02:06:35.586067       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0211 02:06:35.586955       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotclasses"
	W0211 02:06:35.587786       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0211 02:06:35.587835       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0211 02:06:38.350325       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0211 02:06:38.352190       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0211 02:06:38.353318       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0211 02:06:38.353362       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0211 02:06:40.604985       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0211 02:06:40.605962       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshots"
	W0211 02:06:40.606610       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0211 02:06:40.606640       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0211 02:06:58.424244       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0211 02:06:58.425094       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="snapshot.storage.k8s.io/v1, Resource=volumesnapshotcontents"
	W0211 02:06:58.425846       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0211 02:06:58.425913       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0211 02:07:08.850859       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="38.302173ms"
	I0211 02:07:08.875660       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="24.7436ms"
	I0211 02:07:08.888589       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="12.855444ms"
	I0211 02:07:08.888669       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-7d9564db4" duration="40.715µs"
	
	
	==> kube-proxy [b91f7189aea56e3261321415be544cf2f98a048a58315ae9b42af963f48d8472] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0211 02:02:49.923539       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0211 02:02:49.941088       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.211"]
	E0211 02:02:49.941167       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0211 02:02:50.025222       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0211 02:02:50.025264       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0211 02:02:50.025287       1 server_linux.go:170] "Using iptables Proxier"
	I0211 02:02:50.037459       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0211 02:02:50.037773       1 server.go:497] "Version info" version="v1.32.1"
	I0211 02:02:50.037786       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0211 02:02:50.040547       1 config.go:199] "Starting service config controller"
	I0211 02:02:50.040578       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0211 02:02:50.040614       1 config.go:105] "Starting endpoint slice config controller"
	I0211 02:02:50.040631       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0211 02:02:50.040995       1 config.go:329] "Starting node config controller"
	I0211 02:02:50.041020       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0211 02:02:50.141224       1 shared_informer.go:320] Caches are synced for node config
	I0211 02:02:50.141224       1 shared_informer.go:320] Caches are synced for service config
	I0211 02:02:50.141238       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [18601f8c523f61e5ffbef73199d3337ddabbaf2f31329d0308eb5f35c7ab2c47] <==
	W0211 02:02:41.037764       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0211 02:02:41.038085       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0211 02:02:41.037829       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0211 02:02:41.038177       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0211 02:02:41.037836       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0211 02:02:41.038268       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0211 02:02:41.037843       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0211 02:02:41.038413       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0211 02:02:41.038641       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0211 02:02:41.038751       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0211 02:02:41.964906       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0211 02:02:41.965001       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0211 02:02:41.971015       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0211 02:02:41.971095       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0211 02:02:42.054301       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0211 02:02:42.055476       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0211 02:02:42.141797       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0211 02:02:42.141854       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0211 02:02:42.213253       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0211 02:02:42.213339       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0211 02:02:42.308353       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0211 02:02:42.308453       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0211 02:02:42.313873       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0211 02:02:42.313964       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0211 02:02:44.928464       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 11 02:06:43 addons-046133 kubelet[1229]: E0211 02:06:43.786165    1229 iptables.go:577] "Could not set up iptables canary" err=<
	Feb 11 02:06:43 addons-046133 kubelet[1229]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 11 02:06:43 addons-046133 kubelet[1229]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 11 02:06:43 addons-046133 kubelet[1229]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 11 02:06:43 addons-046133 kubelet[1229]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 11 02:06:43 addons-046133 kubelet[1229]: E0211 02:06:43.966362    1229 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739239603965439490,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595288,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 11 02:06:43 addons-046133 kubelet[1229]: E0211 02:06:43.966487    1229 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739239603965439490,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595288,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 11 02:06:51 addons-046133 kubelet[1229]: I0211 02:06:51.766733    1229 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Feb 11 02:06:53 addons-046133 kubelet[1229]: E0211 02:06:53.968994    1229 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739239613968511834,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595288,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 11 02:06:53 addons-046133 kubelet[1229]: E0211 02:06:53.969654    1229 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739239613968511834,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595288,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 11 02:07:03 addons-046133 kubelet[1229]: E0211 02:07:03.976338    1229 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739239623972443783,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595288,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 11 02:07:03 addons-046133 kubelet[1229]: E0211 02:07:03.976404    1229 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739239623972443783,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595288,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Feb 11 02:07:08 addons-046133 kubelet[1229]: I0211 02:07:08.846062    1229 memory_manager.go:355] "RemoveStaleState removing state" podUID="b4b4d3d8-627f-4543-8aeb-1e54293c491c" containerName="task-pv-container"
	Feb 11 02:07:08 addons-046133 kubelet[1229]: I0211 02:07:08.846110    1229 memory_manager.go:355] "RemoveStaleState removing state" podUID="943bfbf6-8625-43b6-9e8e-3e895c97d3e7" containerName="node-driver-registrar"
	Feb 11 02:07:08 addons-046133 kubelet[1229]: I0211 02:07:08.846119    1229 memory_manager.go:355] "RemoveStaleState removing state" podUID="943bfbf6-8625-43b6-9e8e-3e895c97d3e7" containerName="csi-snapshotter"
	Feb 11 02:07:08 addons-046133 kubelet[1229]: I0211 02:07:08.846127    1229 memory_manager.go:355] "RemoveStaleState removing state" podUID="943bfbf6-8625-43b6-9e8e-3e895c97d3e7" containerName="liveness-probe"
	Feb 11 02:07:08 addons-046133 kubelet[1229]: I0211 02:07:08.846136    1229 memory_manager.go:355] "RemoveStaleState removing state" podUID="a72b8da8-d1f0-42d8-8357-d72e9d68eaf4" containerName="volume-snapshot-controller"
	Feb 11 02:07:08 addons-046133 kubelet[1229]: I0211 02:07:08.846141    1229 memory_manager.go:355] "RemoveStaleState removing state" podUID="943bfbf6-8625-43b6-9e8e-3e895c97d3e7" containerName="csi-external-health-monitor-controller"
	Feb 11 02:07:08 addons-046133 kubelet[1229]: I0211 02:07:08.846146    1229 memory_manager.go:355] "RemoveStaleState removing state" podUID="943bfbf6-8625-43b6-9e8e-3e895c97d3e7" containerName="hostpath"
	Feb 11 02:07:08 addons-046133 kubelet[1229]: I0211 02:07:08.846152    1229 memory_manager.go:355] "RemoveStaleState removing state" podUID="eefcbc10-1a7d-4e34-a323-1120072b5011" containerName="csi-resizer"
	Feb 11 02:07:08 addons-046133 kubelet[1229]: I0211 02:07:08.846157    1229 memory_manager.go:355] "RemoveStaleState removing state" podUID="943bfbf6-8625-43b6-9e8e-3e895c97d3e7" containerName="csi-provisioner"
	Feb 11 02:07:08 addons-046133 kubelet[1229]: I0211 02:07:08.846161    1229 memory_manager.go:355] "RemoveStaleState removing state" podUID="d9f9d5e4-490b-4ce9-aa29-ee1624383789" containerName="volume-snapshot-controller"
	Feb 11 02:07:08 addons-046133 kubelet[1229]: I0211 02:07:08.846166    1229 memory_manager.go:355] "RemoveStaleState removing state" podUID="5b67d9ea-4445-4ced-ad29-816a6909874b" containerName="local-path-provisioner"
	Feb 11 02:07:08 addons-046133 kubelet[1229]: I0211 02:07:08.846173    1229 memory_manager.go:355] "RemoveStaleState removing state" podUID="804fff63-2d4b-4ad3-9abd-ce2bbb268678" containerName="csi-attacher"
	Feb 11 02:07:08 addons-046133 kubelet[1229]: I0211 02:07:08.907159    1229 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qws6\" (UniqueName: \"kubernetes.io/projected/31c8cb8e-ae30-430b-a4ad-ff5007e2019e-kube-api-access-4qws6\") pod \"hello-world-app-7d9564db4-ssp52\" (UID: \"31c8cb8e-ae30-430b-a4ad-ff5007e2019e\") " pod="default/hello-world-app-7d9564db4-ssp52"
	
	
	==> storage-provisioner [a7e57fb9ef4493801affaab700ec503bd4e6f39002e38b8ad507aaac87cc972b] <==
	I0211 02:02:55.236448       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0211 02:02:55.262547       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0211 02:02:55.262598       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0211 02:02:55.271234       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0211 02:02:55.271591       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-046133_50f789c7-9e3e-414c-8a28-46428049c8ea!
	I0211 02:02:55.275450       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7de9ab8f-2de7-40c0-a732-008f53252513", APIVersion:"v1", ResourceVersion:"584", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-046133_50f789c7-9e3e-414c-8a28-46428049c8ea became leader
	I0211 02:02:55.373179       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-046133_50f789c7-9e3e-414c-8a28-46428049c8ea!
	

                                                
                                                
-- /stdout --
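The kube-proxy excerpt above warns that nodePortAddresses is unset and suggests `--nodeport-addresses primary`. A minimal sketch of acting on that hint, assuming the kubeadm-style kube-proxy ConfigMap in kube-system (name and key are assumptions, not taken from this run):

	# Inspect the current kube-proxy configuration (ConfigMap name/key assumed per kubeadm defaults).
	kubectl --context addons-046133 -n kube-system get configmap kube-proxy -o yaml
	# Then set nodePortAddresses under config.conf, for example:
	#   nodePortAddresses: ["primary"]
	# and restart the kube-proxy pods so the change is picked up (DaemonSet name assumed):
	kubectl --context addons-046133 -n kube-system rollout restart daemonset kube-proxy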
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-046133 -n addons-046133
helpers_test.go:261: (dbg) Run:  kubectl --context addons-046133 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-7d9564db4-ssp52 ingress-nginx-admission-create-sv7t9 ingress-nginx-admission-patch-xtv7c
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-046133 describe pod hello-world-app-7d9564db4-ssp52 ingress-nginx-admission-create-sv7t9 ingress-nginx-admission-patch-xtv7c
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-046133 describe pod hello-world-app-7d9564db4-ssp52 ingress-nginx-admission-create-sv7t9 ingress-nginx-admission-patch-xtv7c: exit status 1 (62.500842ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-7d9564db4-ssp52
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-046133/192.168.39.211
	Start Time:       Tue, 11 Feb 2025 02:07:08 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=7d9564db4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-7d9564db4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4qws6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-4qws6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-7d9564db4-ssp52 to addons-046133
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-sv7t9" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-xtv7c" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-046133 describe pod hello-world-app-7d9564db4-ssp52 ingress-nginx-admission-create-sv7t9 ingress-nginx-admission-patch-xtv7c: exit status 1
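The aggregate describe above exits non-zero because two of the named pods no longer exist. A small sketch, using the same pod names, that describes each pod separately so a single NotFound does not mask the others:

	for p in hello-world-app-7d9564db4-ssp52 \
	         ingress-nginx-admission-create-sv7t9 \
	         ingress-nginx-admission-patch-xtv7c; do
	  # Describe each pod on its own; report (rather than fail on) a missing pod.
	  kubectl --context addons-046133 describe pod "$p" || echo "pod $p not found"
	done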
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-046133 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-046133 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-046133 addons disable ingress --alsologtostderr -v=1: (7.680653188s)
--- FAIL: TestAddons/parallel/Ingress (154.08s)

                                                
                                    
TestPreload (290.06s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-813040 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-813040 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m11.997130233s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-813040 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-813040 image pull gcr.io/k8s-minikube/busybox: (2.320383138s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-813040
E0211 02:57:23.754912   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/functional-454298/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-813040: (1m30.763498074s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-813040 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E0211 02:58:59.286754   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:59:16.210399   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-813040 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m2.219232387s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-813040 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:629: *** TestPreload FAILED at 2025-02-11 02:59:37.154580941 +0000 UTC m=+3468.585651176
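A minimal manual re-check of the assertion that failed above (the image list command is the one run by the test; the grep is an added sketch):

	# List images in the restarted profile and look for the busybox image the test expects.
	out/minikube-linux-amd64 -p test-preload-813040 image list | grep 'gcr.io/k8s-minikube/busybox' \
	  || echo 'gcr.io/k8s-minikube/busybox missing from image list'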
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-813040 -n test-preload-813040
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-813040 logs -n 25
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-065377 ssh -n                                                                 | multinode-065377     | jenkins | v1.35.0 | 11 Feb 25 02:42 UTC | 11 Feb 25 02:42 UTC |
	|         | multinode-065377-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-065377 ssh -n multinode-065377 sudo cat                                       | multinode-065377     | jenkins | v1.35.0 | 11 Feb 25 02:42 UTC | 11 Feb 25 02:42 UTC |
	|         | /home/docker/cp-test_multinode-065377-m03_multinode-065377.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-065377 cp multinode-065377-m03:/home/docker/cp-test.txt                       | multinode-065377     | jenkins | v1.35.0 | 11 Feb 25 02:42 UTC | 11 Feb 25 02:42 UTC |
	|         | multinode-065377-m02:/home/docker/cp-test_multinode-065377-m03_multinode-065377-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-065377 ssh -n                                                                 | multinode-065377     | jenkins | v1.35.0 | 11 Feb 25 02:42 UTC | 11 Feb 25 02:42 UTC |
	|         | multinode-065377-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-065377 ssh -n multinode-065377-m02 sudo cat                                   | multinode-065377     | jenkins | v1.35.0 | 11 Feb 25 02:42 UTC | 11 Feb 25 02:42 UTC |
	|         | /home/docker/cp-test_multinode-065377-m03_multinode-065377-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-065377 node stop m03                                                          | multinode-065377     | jenkins | v1.35.0 | 11 Feb 25 02:42 UTC | 11 Feb 25 02:42 UTC |
	| node    | multinode-065377 node start                                                             | multinode-065377     | jenkins | v1.35.0 | 11 Feb 25 02:42 UTC | 11 Feb 25 02:43 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-065377                                                                | multinode-065377     | jenkins | v1.35.0 | 11 Feb 25 02:43 UTC |                     |
	| stop    | -p multinode-065377                                                                     | multinode-065377     | jenkins | v1.35.0 | 11 Feb 25 02:43 UTC | 11 Feb 25 02:46 UTC |
	| start   | -p multinode-065377                                                                     | multinode-065377     | jenkins | v1.35.0 | 11 Feb 25 02:46 UTC | 11 Feb 25 02:48 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-065377                                                                | multinode-065377     | jenkins | v1.35.0 | 11 Feb 25 02:48 UTC |                     |
	| node    | multinode-065377 node delete                                                            | multinode-065377     | jenkins | v1.35.0 | 11 Feb 25 02:48 UTC | 11 Feb 25 02:48 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-065377 stop                                                                   | multinode-065377     | jenkins | v1.35.0 | 11 Feb 25 02:48 UTC | 11 Feb 25 02:51 UTC |
	| start   | -p multinode-065377                                                                     | multinode-065377     | jenkins | v1.35.0 | 11 Feb 25 02:51 UTC | 11 Feb 25 02:53 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-065377                                                                | multinode-065377     | jenkins | v1.35.0 | 11 Feb 25 02:53 UTC |                     |
	| start   | -p multinode-065377-m02                                                                 | multinode-065377-m02 | jenkins | v1.35.0 | 11 Feb 25 02:53 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-065377-m03                                                                 | multinode-065377-m03 | jenkins | v1.35.0 | 11 Feb 25 02:53 UTC | 11 Feb 25 02:54 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-065377                                                                 | multinode-065377     | jenkins | v1.35.0 | 11 Feb 25 02:54 UTC |                     |
	| delete  | -p multinode-065377-m03                                                                 | multinode-065377-m03 | jenkins | v1.35.0 | 11 Feb 25 02:54 UTC | 11 Feb 25 02:54 UTC |
	| delete  | -p multinode-065377                                                                     | multinode-065377     | jenkins | v1.35.0 | 11 Feb 25 02:54 UTC | 11 Feb 25 02:54 UTC |
	| start   | -p test-preload-813040                                                                  | test-preload-813040  | jenkins | v1.35.0 | 11 Feb 25 02:54 UTC | 11 Feb 25 02:57 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-813040 image pull                                                          | test-preload-813040  | jenkins | v1.35.0 | 11 Feb 25 02:57 UTC | 11 Feb 25 02:57 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-813040                                                                  | test-preload-813040  | jenkins | v1.35.0 | 11 Feb 25 02:57 UTC | 11 Feb 25 02:58 UTC |
	| start   | -p test-preload-813040                                                                  | test-preload-813040  | jenkins | v1.35.0 | 11 Feb 25 02:58 UTC | 11 Feb 25 02:59 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-813040 image list                                                          | test-preload-813040  | jenkins | v1.35.0 | 11 Feb 25 02:59 UTC | 11 Feb 25 02:59 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/11 02:58:34
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
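The entries below follow that glog-style prefix, so severity can be filtered straight from the text. A minimal sketch, assuming the log has been saved to a file (the filename here is hypothetical):

	# show only warning/error lines from a glog-formatted log (filename is illustrative)
	grep -E '^\s*[WE][0-9]{4} ' last-start.log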
	I0211 02:58:34.765798   51325 out.go:345] Setting OutFile to fd 1 ...
	I0211 02:58:34.765934   51325 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 02:58:34.765944   51325 out.go:358] Setting ErrFile to fd 2...
	I0211 02:58:34.765951   51325 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 02:58:34.766120   51325 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20400-12456/.minikube/bin
	I0211 02:58:34.766640   51325 out.go:352] Setting JSON to false
	I0211 02:58:34.767545   51325 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6066,"bootTime":1739236649,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0211 02:58:34.767634   51325 start.go:139] virtualization: kvm guest
	I0211 02:58:34.769912   51325 out.go:177] * [test-preload-813040] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0211 02:58:34.771790   51325 notify.go:220] Checking for updates...
	I0211 02:58:34.771825   51325 out.go:177]   - MINIKUBE_LOCATION=20400
	I0211 02:58:34.773103   51325 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0211 02:58:34.774526   51325 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20400-12456/kubeconfig
	I0211 02:58:34.775666   51325 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20400-12456/.minikube
	I0211 02:58:34.776796   51325 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0211 02:58:34.777951   51325 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0211 02:58:34.779401   51325 config.go:182] Loaded profile config "test-preload-813040": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0211 02:58:34.779757   51325 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:58:34.779830   51325 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:58:34.794057   51325 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33661
	I0211 02:58:34.794476   51325 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:58:34.795098   51325 main.go:141] libmachine: Using API Version  1
	I0211 02:58:34.795119   51325 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:58:34.795458   51325 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:58:34.795618   51325 main.go:141] libmachine: (test-preload-813040) Calling .DriverName
	I0211 02:58:34.797235   51325 out.go:177] * Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
	I0211 02:58:34.798449   51325 driver.go:394] Setting default libvirt URI to qemu:///system
	I0211 02:58:34.798701   51325 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:58:34.798741   51325 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:58:34.812700   51325 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40051
	I0211 02:58:34.813034   51325 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:58:34.813488   51325 main.go:141] libmachine: Using API Version  1
	I0211 02:58:34.813506   51325 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:58:34.813802   51325 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:58:34.814017   51325 main.go:141] libmachine: (test-preload-813040) Calling .DriverName
	I0211 02:58:34.846823   51325 out.go:177] * Using the kvm2 driver based on existing profile
	I0211 02:58:34.847978   51325 start.go:297] selected driver: kvm2
	I0211 02:58:34.847993   51325 start.go:901] validating driver "kvm2" against &{Name:test-preload-813040 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-813040 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0211 02:58:34.848091   51325 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0211 02:58:34.848723   51325 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0211 02:58:34.848784   51325 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20400-12456/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0211 02:58:34.862528   51325 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0211 02:58:34.862837   51325 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0211 02:58:34.862863   51325 cni.go:84] Creating CNI manager for ""
	I0211 02:58:34.862924   51325 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0211 02:58:34.862983   51325 start.go:340] cluster config:
	{Name:test-preload-813040 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-813040 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0211 02:58:34.863078   51325 iso.go:125] acquiring lock: {Name:mkf866c6e52b4efa55cc59a9f329105471716f9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0211 02:58:34.864688   51325 out.go:177] * Starting "test-preload-813040" primary control-plane node in "test-preload-813040" cluster
	I0211 02:58:34.865835   51325 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0211 02:58:34.892698   51325 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0211 02:58:34.892714   51325 cache.go:56] Caching tarball of preloaded images
	I0211 02:58:34.892839   51325 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0211 02:58:34.894295   51325 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0211 02:58:34.895364   51325 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0211 02:58:34.920095   51325 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/20400-12456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0211 02:58:40.055121   51325 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0211 02:58:40.055218   51325 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20400-12456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0211 02:58:40.893093   51325 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
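The preload tarball is fetched with an md5 hint appended to the download URL and then re-verified on disk. The same check can be reproduced by hand; the checksum and cache path below are taken directly from the log lines above:

	# re-verify the cached preload tarball against the md5 from the download URL
	echo "b2ee0ab83ed99f9e7ff71cb0cf27e8f9  /home/jenkins/minikube-integration/20400-12456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4" | md5sum -c -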
	I0211 02:58:40.893223   51325 profile.go:143] Saving config to /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/test-preload-813040/config.json ...
	I0211 02:58:40.893468   51325 start.go:360] acquireMachinesLock for test-preload-813040: {Name:mk0cbf79bfabdb28d0a301765db34c154a72eff0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0211 02:58:40.893535   51325 start.go:364] duration metric: took 45.08µs to acquireMachinesLock for "test-preload-813040"
	I0211 02:58:40.893552   51325 start.go:96] Skipping create...Using existing machine configuration
	I0211 02:58:40.893561   51325 fix.go:54] fixHost starting: 
	I0211 02:58:40.893817   51325 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:58:40.893858   51325 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:58:40.908006   51325 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44817
	I0211 02:58:40.908443   51325 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:58:40.908938   51325 main.go:141] libmachine: Using API Version  1
	I0211 02:58:40.908962   51325 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:58:40.909332   51325 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:58:40.909505   51325 main.go:141] libmachine: (test-preload-813040) Calling .DriverName
	I0211 02:58:40.909646   51325 main.go:141] libmachine: (test-preload-813040) Calling .GetState
	I0211 02:58:40.911154   51325 fix.go:112] recreateIfNeeded on test-preload-813040: state=Stopped err=<nil>
	I0211 02:58:40.911190   51325 main.go:141] libmachine: (test-preload-813040) Calling .DriverName
	W0211 02:58:40.911347   51325 fix.go:138] unexpected machine state, will restart: <nil>
	I0211 02:58:40.913425   51325 out.go:177] * Restarting existing kvm2 VM for "test-preload-813040" ...
	I0211 02:58:40.915010   51325 main.go:141] libmachine: (test-preload-813040) Calling .Start
	I0211 02:58:40.915174   51325 main.go:141] libmachine: (test-preload-813040) starting domain...
	I0211 02:58:40.915196   51325 main.go:141] libmachine: (test-preload-813040) ensuring networks are active...
	I0211 02:58:40.915883   51325 main.go:141] libmachine: (test-preload-813040) Ensuring network default is active
	I0211 02:58:40.916238   51325 main.go:141] libmachine: (test-preload-813040) Ensuring network mk-test-preload-813040 is active
	I0211 02:58:40.916633   51325 main.go:141] libmachine: (test-preload-813040) getting domain XML...
	I0211 02:58:40.917290   51325 main.go:141] libmachine: (test-preload-813040) creating domain...
	I0211 02:58:42.093224   51325 main.go:141] libmachine: (test-preload-813040) waiting for IP...
	I0211 02:58:42.093954   51325 main.go:141] libmachine: (test-preload-813040) DBG | domain test-preload-813040 has defined MAC address 52:54:00:e3:5d:00 in network mk-test-preload-813040
	I0211 02:58:42.094314   51325 main.go:141] libmachine: (test-preload-813040) DBG | unable to find current IP address of domain test-preload-813040 in network mk-test-preload-813040
	I0211 02:58:42.094394   51325 main.go:141] libmachine: (test-preload-813040) DBG | I0211 02:58:42.094324   51376 retry.go:31] will retry after 264.850837ms: waiting for domain to come up
	I0211 02:58:42.360801   51325 main.go:141] libmachine: (test-preload-813040) DBG | domain test-preload-813040 has defined MAC address 52:54:00:e3:5d:00 in network mk-test-preload-813040
	I0211 02:58:42.361272   51325 main.go:141] libmachine: (test-preload-813040) DBG | unable to find current IP address of domain test-preload-813040 in network mk-test-preload-813040
	I0211 02:58:42.361301   51325 main.go:141] libmachine: (test-preload-813040) DBG | I0211 02:58:42.361233   51376 retry.go:31] will retry after 359.341161ms: waiting for domain to come up
	I0211 02:58:42.721782   51325 main.go:141] libmachine: (test-preload-813040) DBG | domain test-preload-813040 has defined MAC address 52:54:00:e3:5d:00 in network mk-test-preload-813040
	I0211 02:58:42.722170   51325 main.go:141] libmachine: (test-preload-813040) DBG | unable to find current IP address of domain test-preload-813040 in network mk-test-preload-813040
	I0211 02:58:42.722222   51325 main.go:141] libmachine: (test-preload-813040) DBG | I0211 02:58:42.722156   51376 retry.go:31] will retry after 399.38981ms: waiting for domain to come up
	I0211 02:58:43.122599   51325 main.go:141] libmachine: (test-preload-813040) DBG | domain test-preload-813040 has defined MAC address 52:54:00:e3:5d:00 in network mk-test-preload-813040
	I0211 02:58:43.122961   51325 main.go:141] libmachine: (test-preload-813040) DBG | unable to find current IP address of domain test-preload-813040 in network mk-test-preload-813040
	I0211 02:58:43.122989   51325 main.go:141] libmachine: (test-preload-813040) DBG | I0211 02:58:43.122936   51376 retry.go:31] will retry after 493.24533ms: waiting for domain to come up
	I0211 02:58:43.617530   51325 main.go:141] libmachine: (test-preload-813040) DBG | domain test-preload-813040 has defined MAC address 52:54:00:e3:5d:00 in network mk-test-preload-813040
	I0211 02:58:43.617906   51325 main.go:141] libmachine: (test-preload-813040) DBG | unable to find current IP address of domain test-preload-813040 in network mk-test-preload-813040
	I0211 02:58:43.617936   51325 main.go:141] libmachine: (test-preload-813040) DBG | I0211 02:58:43.617870   51376 retry.go:31] will retry after 505.071094ms: waiting for domain to come up
	I0211 02:58:44.124415   51325 main.go:141] libmachine: (test-preload-813040) DBG | domain test-preload-813040 has defined MAC address 52:54:00:e3:5d:00 in network mk-test-preload-813040
	I0211 02:58:44.124781   51325 main.go:141] libmachine: (test-preload-813040) DBG | unable to find current IP address of domain test-preload-813040 in network mk-test-preload-813040
	I0211 02:58:44.124834   51325 main.go:141] libmachine: (test-preload-813040) DBG | I0211 02:58:44.124750   51376 retry.go:31] will retry after 699.105695ms: waiting for domain to come up
	I0211 02:58:44.825386   51325 main.go:141] libmachine: (test-preload-813040) DBG | domain test-preload-813040 has defined MAC address 52:54:00:e3:5d:00 in network mk-test-preload-813040
	I0211 02:58:44.825860   51325 main.go:141] libmachine: (test-preload-813040) DBG | unable to find current IP address of domain test-preload-813040 in network mk-test-preload-813040
	I0211 02:58:44.825893   51325 main.go:141] libmachine: (test-preload-813040) DBG | I0211 02:58:44.825827   51376 retry.go:31] will retry after 903.277738ms: waiting for domain to come up
	I0211 02:58:45.730737   51325 main.go:141] libmachine: (test-preload-813040) DBG | domain test-preload-813040 has defined MAC address 52:54:00:e3:5d:00 in network mk-test-preload-813040
	I0211 02:58:45.731064   51325 main.go:141] libmachine: (test-preload-813040) DBG | unable to find current IP address of domain test-preload-813040 in network mk-test-preload-813040
	I0211 02:58:45.731089   51325 main.go:141] libmachine: (test-preload-813040) DBG | I0211 02:58:45.731021   51376 retry.go:31] will retry after 1.357299557s: waiting for domain to come up
	I0211 02:58:47.090025   51325 main.go:141] libmachine: (test-preload-813040) DBG | domain test-preload-813040 has defined MAC address 52:54:00:e3:5d:00 in network mk-test-preload-813040
	I0211 02:58:47.090397   51325 main.go:141] libmachine: (test-preload-813040) DBG | unable to find current IP address of domain test-preload-813040 in network mk-test-preload-813040
	I0211 02:58:47.090432   51325 main.go:141] libmachine: (test-preload-813040) DBG | I0211 02:58:47.090360   51376 retry.go:31] will retry after 1.258140693s: waiting for domain to come up
	I0211 02:58:48.351113   51325 main.go:141] libmachine: (test-preload-813040) DBG | domain test-preload-813040 has defined MAC address 52:54:00:e3:5d:00 in network mk-test-preload-813040
	I0211 02:58:48.351492   51325 main.go:141] libmachine: (test-preload-813040) DBG | unable to find current IP address of domain test-preload-813040 in network mk-test-preload-813040
	I0211 02:58:48.351524   51325 main.go:141] libmachine: (test-preload-813040) DBG | I0211 02:58:48.351464   51376 retry.go:31] will retry after 1.570799821s: waiting for domain to come up
	I0211 02:58:49.924461   51325 main.go:141] libmachine: (test-preload-813040) DBG | domain test-preload-813040 has defined MAC address 52:54:00:e3:5d:00 in network mk-test-preload-813040
	I0211 02:58:49.924872   51325 main.go:141] libmachine: (test-preload-813040) DBG | unable to find current IP address of domain test-preload-813040 in network mk-test-preload-813040
	I0211 02:58:49.924903   51325 main.go:141] libmachine: (test-preload-813040) DBG | I0211 02:58:49.924851   51376 retry.go:31] will retry after 2.638018775s: waiting for domain to come up
	I0211 02:58:52.564303   51325 main.go:141] libmachine: (test-preload-813040) DBG | domain test-preload-813040 has defined MAC address 52:54:00:e3:5d:00 in network mk-test-preload-813040
	I0211 02:58:52.564730   51325 main.go:141] libmachine: (test-preload-813040) DBG | unable to find current IP address of domain test-preload-813040 in network mk-test-preload-813040
	I0211 02:58:52.564783   51325 main.go:141] libmachine: (test-preload-813040) DBG | I0211 02:58:52.564695   51376 retry.go:31] will retry after 3.316317636s: waiting for domain to come up
	I0211 02:58:55.885071   51325 main.go:141] libmachine: (test-preload-813040) DBG | domain test-preload-813040 has defined MAC address 52:54:00:e3:5d:00 in network mk-test-preload-813040
	I0211 02:58:55.885502   51325 main.go:141] libmachine: (test-preload-813040) DBG | unable to find current IP address of domain test-preload-813040 in network mk-test-preload-813040
	I0211 02:58:55.885524   51325 main.go:141] libmachine: (test-preload-813040) DBG | I0211 02:58:55.885457   51376 retry.go:31] will retry after 2.86329508s: waiting for domain to come up
	I0211 02:58:58.749871   51325 main.go:141] libmachine: (test-preload-813040) DBG | domain test-preload-813040 has defined MAC address 52:54:00:e3:5d:00 in network mk-test-preload-813040
	I0211 02:58:58.750354   51325 main.go:141] libmachine: (test-preload-813040) found domain IP: 192.168.39.238
	I0211 02:58:58.750367   51325 main.go:141] libmachine: (test-preload-813040) reserving static IP address...
	I0211 02:58:58.750376   51325 main.go:141] libmachine: (test-preload-813040) DBG | domain test-preload-813040 has current primary IP address 192.168.39.238 and MAC address 52:54:00:e3:5d:00 in network mk-test-preload-813040
	I0211 02:58:58.750823   51325 main.go:141] libmachine: (test-preload-813040) DBG | found host DHCP lease matching {name: "test-preload-813040", mac: "52:54:00:e3:5d:00", ip: "192.168.39.238"} in network mk-test-preload-813040: {Iface:virbr1 ExpiryTime:2025-02-11 03:58:51 +0000 UTC Type:0 Mac:52:54:00:e3:5d:00 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:test-preload-813040 Clientid:01:52:54:00:e3:5d:00}
	I0211 02:58:58.750853   51325 main.go:141] libmachine: (test-preload-813040) DBG | skip adding static IP to network mk-test-preload-813040 - found existing host DHCP lease matching {name: "test-preload-813040", mac: "52:54:00:e3:5d:00", ip: "192.168.39.238"}
	I0211 02:58:58.750864   51325 main.go:141] libmachine: (test-preload-813040) reserved static IP address 192.168.39.238 for domain test-preload-813040
	I0211 02:58:58.750887   51325 main.go:141] libmachine: (test-preload-813040) waiting for SSH...
	I0211 02:58:58.750909   51325 main.go:141] libmachine: (test-preload-813040) DBG | Getting to WaitForSSH function...
	I0211 02:58:58.753125   51325 main.go:141] libmachine: (test-preload-813040) DBG | domain test-preload-813040 has defined MAC address 52:54:00:e3:5d:00 in network mk-test-preload-813040
	I0211 02:58:58.753439   51325 main.go:141] libmachine: (test-preload-813040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:5d:00", ip: ""} in network mk-test-preload-813040: {Iface:virbr1 ExpiryTime:2025-02-11 03:58:51 +0000 UTC Type:0 Mac:52:54:00:e3:5d:00 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:test-preload-813040 Clientid:01:52:54:00:e3:5d:00}
	I0211 02:58:58.753467   51325 main.go:141] libmachine: (test-preload-813040) DBG | domain test-preload-813040 has defined IP address 192.168.39.238 and MAC address 52:54:00:e3:5d:00 in network mk-test-preload-813040
	I0211 02:58:58.753596   51325 main.go:141] libmachine: (test-preload-813040) DBG | Using SSH client type: external
	I0211 02:58:58.753619   51325 main.go:141] libmachine: (test-preload-813040) DBG | Using SSH private key: /home/jenkins/minikube-integration/20400-12456/.minikube/machines/test-preload-813040/id_rsa (-rw-------)
	I0211 02:58:58.753664   51325 main.go:141] libmachine: (test-preload-813040) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.238 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20400-12456/.minikube/machines/test-preload-813040/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0211 02:58:58.753677   51325 main.go:141] libmachine: (test-preload-813040) DBG | About to run SSH command:
	I0211 02:58:58.753688   51325 main.go:141] libmachine: (test-preload-813040) DBG | exit 0
	I0211 02:58:58.878418   51325 main.go:141] libmachine: (test-preload-813040) DBG | SSH cmd err, output: <nil>: 
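The WaitForSSH probe above simply runs `exit 0` through the external ssh client with the options listed in the DBG line; restated as a plain command, it is roughly:

	ssh -F /dev/null \
	    -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none \
	    -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 \
	    -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes \
	    -i /home/jenkins/minikube-integration/20400-12456/.minikube/machines/test-preload-813040/id_rsa \
	    -p 22 docker@192.168.39.238 'exit 0'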
	I0211 02:58:58.878795   51325 main.go:141] libmachine: (test-preload-813040) Calling .GetConfigRaw
	I0211 02:58:58.879461   51325 main.go:141] libmachine: (test-preload-813040) Calling .GetIP
	I0211 02:58:58.881919   51325 main.go:141] libmachine: (test-preload-813040) DBG | domain test-preload-813040 has defined MAC address 52:54:00:e3:5d:00 in network mk-test-preload-813040
	I0211 02:58:58.882271   51325 main.go:141] libmachine: (test-preload-813040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:5d:00", ip: ""} in network mk-test-preload-813040: {Iface:virbr1 ExpiryTime:2025-02-11 03:58:51 +0000 UTC Type:0 Mac:52:54:00:e3:5d:00 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:test-preload-813040 Clientid:01:52:54:00:e3:5d:00}
	I0211 02:58:58.882291   51325 main.go:141] libmachine: (test-preload-813040) DBG | domain test-preload-813040 has defined IP address 192.168.39.238 and MAC address 52:54:00:e3:5d:00 in network mk-test-preload-813040
	I0211 02:58:58.882517   51325 profile.go:143] Saving config to /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/test-preload-813040/config.json ...
	I0211 02:58:58.882681   51325 machine.go:93] provisionDockerMachine start ...
	I0211 02:58:58.882697   51325 main.go:141] libmachine: (test-preload-813040) Calling .DriverName
	I0211 02:58:58.882913   51325 main.go:141] libmachine: (test-preload-813040) Calling .GetSSHHostname
	I0211 02:58:58.884888   51325 main.go:141] libmachine: (test-preload-813040) DBG | domain test-preload-813040 has defined MAC address 52:54:00:e3:5d:00 in network mk-test-preload-813040
	I0211 02:58:58.885169   51325 main.go:141] libmachine: (test-preload-813040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:5d:00", ip: ""} in network mk-test-preload-813040: {Iface:virbr1 ExpiryTime:2025-02-11 03:58:51 +0000 UTC Type:0 Mac:52:54:00:e3:5d:00 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:test-preload-813040 Clientid:01:52:54:00:e3:5d:00}
	I0211 02:58:58.885214   51325 main.go:141] libmachine: (test-preload-813040) DBG | domain test-preload-813040 has defined IP address 192.168.39.238 and MAC address 52:54:00:e3:5d:00 in network mk-test-preload-813040
	I0211 02:58:58.885292   51325 main.go:141] libmachine: (test-preload-813040) Calling .GetSSHPort
	I0211 02:58:58.885458   51325 main.go:141] libmachine: (test-preload-813040) Calling .GetSSHKeyPath
	I0211 02:58:58.885604   51325 main.go:141] libmachine: (test-preload-813040) Calling .GetSSHKeyPath
	I0211 02:58:58.885704   51325 main.go:141] libmachine: (test-preload-813040) Calling .GetSSHUsername
	I0211 02:58:58.885824   51325 main.go:141] libmachine: Using SSH client type: native
	I0211 02:58:58.886028   51325 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0211 02:58:58.886039   51325 main.go:141] libmachine: About to run SSH command:
	hostname
	I0211 02:58:58.990918   51325 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0211 02:58:58.990942   51325 main.go:141] libmachine: (test-preload-813040) Calling .GetMachineName
	I0211 02:58:58.991196   51325 buildroot.go:166] provisioning hostname "test-preload-813040"
	I0211 02:58:58.991233   51325 main.go:141] libmachine: (test-preload-813040) Calling .GetMachineName
	I0211 02:58:58.991451   51325 main.go:141] libmachine: (test-preload-813040) Calling .GetSSHHostname
	I0211 02:58:58.994071   51325 main.go:141] libmachine: (test-preload-813040) DBG | domain test-preload-813040 has defined MAC address 52:54:00:e3:5d:00 in network mk-test-preload-813040
	I0211 02:58:58.994448   51325 main.go:141] libmachine: (test-preload-813040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:5d:00", ip: ""} in network mk-test-preload-813040: {Iface:virbr1 ExpiryTime:2025-02-11 03:58:51 +0000 UTC Type:0 Mac:52:54:00:e3:5d:00 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:test-preload-813040 Clientid:01:52:54:00:e3:5d:00}
	I0211 02:58:58.994473   51325 main.go:141] libmachine: (test-preload-813040) DBG | domain test-preload-813040 has defined IP address 192.168.39.238 and MAC address 52:54:00:e3:5d:00 in network mk-test-preload-813040
	I0211 02:58:58.994579   51325 main.go:141] libmachine: (test-preload-813040) Calling .GetSSHPort
	I0211 02:58:58.994735   51325 main.go:141] libmachine: (test-preload-813040) Calling .GetSSHKeyPath
	I0211 02:58:58.994845   51325 main.go:141] libmachine: (test-preload-813040) Calling .GetSSHKeyPath
	I0211 02:58:58.994987   51325 main.go:141] libmachine: (test-preload-813040) Calling .GetSSHUsername
	I0211 02:58:58.995104   51325 main.go:141] libmachine: Using SSH client type: native
	I0211 02:58:58.995275   51325 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0211 02:58:58.995287   51325 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-813040 && echo "test-preload-813040" | sudo tee /etc/hostname
	I0211 02:58:59.111721   51325 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-813040
	
	I0211 02:58:59.111747   51325 main.go:141] libmachine: (test-preload-813040) Calling .GetSSHHostname
	I0211 02:58:59.114353   51325 main.go:141] libmachine: (test-preload-813040) DBG | domain test-preload-813040 has defined MAC address 52:54:00:e3:5d:00 in network mk-test-preload-813040
	I0211 02:58:59.114732   51325 main.go:141] libmachine: (test-preload-813040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:5d:00", ip: ""} in network mk-test-preload-813040: {Iface:virbr1 ExpiryTime:2025-02-11 03:58:51 +0000 UTC Type:0 Mac:52:54:00:e3:5d:00 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:test-preload-813040 Clientid:01:52:54:00:e3:5d:00}
	I0211 02:58:59.114765   51325 main.go:141] libmachine: (test-preload-813040) DBG | domain test-preload-813040 has defined IP address 192.168.39.238 and MAC address 52:54:00:e3:5d:00 in network mk-test-preload-813040
	I0211 02:58:59.114915   51325 main.go:141] libmachine: (test-preload-813040) Calling .GetSSHPort
	I0211 02:58:59.115068   51325 main.go:141] libmachine: (test-preload-813040) Calling .GetSSHKeyPath
	I0211 02:58:59.115226   51325 main.go:141] libmachine: (test-preload-813040) Calling .GetSSHKeyPath
	I0211 02:58:59.115367   51325 main.go:141] libmachine: (test-preload-813040) Calling .GetSSHUsername
	I0211 02:58:59.115503   51325 main.go:141] libmachine: Using SSH client type: native
	I0211 02:58:59.115677   51325 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0211 02:58:59.115694   51325 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-813040' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-813040/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-813040' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0211 02:58:59.226954   51325 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0211 02:58:59.226983   51325 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20400-12456/.minikube CaCertPath:/home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20400-12456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20400-12456/.minikube}
	I0211 02:58:59.227038   51325 buildroot.go:174] setting up certificates
	I0211 02:58:59.227051   51325 provision.go:84] configureAuth start
	I0211 02:58:59.227064   51325 main.go:141] libmachine: (test-preload-813040) Calling .GetMachineName
	I0211 02:58:59.227345   51325 main.go:141] libmachine: (test-preload-813040) Calling .GetIP
	I0211 02:58:59.229813   51325 main.go:141] libmachine: (test-preload-813040) DBG | domain test-preload-813040 has defined MAC address 52:54:00:e3:5d:00 in network mk-test-preload-813040
	I0211 02:58:59.230072   51325 main.go:141] libmachine: (test-preload-813040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:5d:00", ip: ""} in network mk-test-preload-813040: {Iface:virbr1 ExpiryTime:2025-02-11 03:58:51 +0000 UTC Type:0 Mac:52:54:00:e3:5d:00 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:test-preload-813040 Clientid:01:52:54:00:e3:5d:00}
	I0211 02:58:59.230097   51325 main.go:141] libmachine: (test-preload-813040) DBG | domain test-preload-813040 has defined IP address 192.168.39.238 and MAC address 52:54:00:e3:5d:00 in network mk-test-preload-813040
	I0211 02:58:59.230212   51325 main.go:141] libmachine: (test-preload-813040) Calling .GetSSHHostname
	I0211 02:58:59.232295   51325 main.go:141] libmachine: (test-preload-813040) DBG | domain test-preload-813040 has defined MAC address 52:54:00:e3:5d:00 in network mk-test-preload-813040
	I0211 02:58:59.232670   51325 main.go:141] libmachine: (test-preload-813040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:5d:00", ip: ""} in network mk-test-preload-813040: {Iface:virbr1 ExpiryTime:2025-02-11 03:58:51 +0000 UTC Type:0 Mac:52:54:00:e3:5d:00 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:test-preload-813040 Clientid:01:52:54:00:e3:5d:00}
	I0211 02:58:59.232703   51325 main.go:141] libmachine: (test-preload-813040) DBG | domain test-preload-813040 has defined IP address 192.168.39.238 and MAC address 52:54:00:e3:5d:00 in network mk-test-preload-813040
	I0211 02:58:59.232833   51325 provision.go:143] copyHostCerts
	I0211 02:58:59.232893   51325 exec_runner.go:144] found /home/jenkins/minikube-integration/20400-12456/.minikube/key.pem, removing ...
	I0211 02:58:59.232915   51325 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20400-12456/.minikube/key.pem
	I0211 02:58:59.232998   51325 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20400-12456/.minikube/key.pem (1679 bytes)
	I0211 02:58:59.233171   51325 exec_runner.go:144] found /home/jenkins/minikube-integration/20400-12456/.minikube/ca.pem, removing ...
	I0211 02:58:59.233182   51325 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20400-12456/.minikube/ca.pem
	I0211 02:58:59.233224   51325 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20400-12456/.minikube/ca.pem (1078 bytes)
	I0211 02:58:59.233347   51325 exec_runner.go:144] found /home/jenkins/minikube-integration/20400-12456/.minikube/cert.pem, removing ...
	I0211 02:58:59.233357   51325 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20400-12456/.minikube/cert.pem
	I0211 02:58:59.233395   51325 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20400-12456/.minikube/cert.pem (1123 bytes)
	I0211 02:58:59.233503   51325 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20400-12456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca-key.pem org=jenkins.test-preload-813040 san=[127.0.0.1 192.168.39.238 localhost minikube test-preload-813040]
	I0211 02:58:59.451244   51325 provision.go:177] copyRemoteCerts
	I0211 02:58:59.451306   51325 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0211 02:58:59.451333   51325 main.go:141] libmachine: (test-preload-813040) Calling .GetSSHHostname
	I0211 02:58:59.453993   51325 main.go:141] libmachine: (test-preload-813040) DBG | domain test-preload-813040 has defined MAC address 52:54:00:e3:5d:00 in network mk-test-preload-813040
	I0211 02:58:59.454299   51325 main.go:141] libmachine: (test-preload-813040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:5d:00", ip: ""} in network mk-test-preload-813040: {Iface:virbr1 ExpiryTime:2025-02-11 03:58:51 +0000 UTC Type:0 Mac:52:54:00:e3:5d:00 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:test-preload-813040 Clientid:01:52:54:00:e3:5d:00}
	I0211 02:58:59.454337   51325 main.go:141] libmachine: (test-preload-813040) DBG | domain test-preload-813040 has defined IP address 192.168.39.238 and MAC address 52:54:00:e3:5d:00 in network mk-test-preload-813040
	I0211 02:58:59.454476   51325 main.go:141] libmachine: (test-preload-813040) Calling .GetSSHPort
	I0211 02:58:59.454646   51325 main.go:141] libmachine: (test-preload-813040) Calling .GetSSHKeyPath
	I0211 02:58:59.454765   51325 main.go:141] libmachine: (test-preload-813040) Calling .GetSSHUsername
	I0211 02:58:59.454865   51325 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/test-preload-813040/id_rsa Username:docker}
	I0211 02:58:59.536521   51325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0211 02:58:59.557888   51325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0211 02:58:59.583514   51325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0211 02:58:59.609671   51325 provision.go:87] duration metric: took 382.607977ms to configureAuth
	I0211 02:58:59.609704   51325 buildroot.go:189] setting minikube options for container-runtime
	I0211 02:58:59.609905   51325 config.go:182] Loaded profile config "test-preload-813040": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0211 02:58:59.610008   51325 main.go:141] libmachine: (test-preload-813040) Calling .GetSSHHostname
	I0211 02:58:59.613055   51325 main.go:141] libmachine: (test-preload-813040) DBG | domain test-preload-813040 has defined MAC address 52:54:00:e3:5d:00 in network mk-test-preload-813040
	I0211 02:58:59.613433   51325 main.go:141] libmachine: (test-preload-813040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:5d:00", ip: ""} in network mk-test-preload-813040: {Iface:virbr1 ExpiryTime:2025-02-11 03:58:51 +0000 UTC Type:0 Mac:52:54:00:e3:5d:00 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:test-preload-813040 Clientid:01:52:54:00:e3:5d:00}
	I0211 02:58:59.613464   51325 main.go:141] libmachine: (test-preload-813040) DBG | domain test-preload-813040 has defined IP address 192.168.39.238 and MAC address 52:54:00:e3:5d:00 in network mk-test-preload-813040
	I0211 02:58:59.613604   51325 main.go:141] libmachine: (test-preload-813040) Calling .GetSSHPort
	I0211 02:58:59.613801   51325 main.go:141] libmachine: (test-preload-813040) Calling .GetSSHKeyPath
	I0211 02:58:59.613929   51325 main.go:141] libmachine: (test-preload-813040) Calling .GetSSHKeyPath
	I0211 02:58:59.614044   51325 main.go:141] libmachine: (test-preload-813040) Calling .GetSSHUsername
	I0211 02:58:59.614200   51325 main.go:141] libmachine: Using SSH client type: native
	I0211 02:58:59.614456   51325 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0211 02:58:59.614480   51325 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0211 02:58:59.822863   51325 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0211 02:58:59.822894   51325 machine.go:96] duration metric: took 940.201406ms to provisionDockerMachine
	I0211 02:58:59.822906   51325 start.go:293] postStartSetup for "test-preload-813040" (driver="kvm2")
	I0211 02:58:59.822916   51325 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0211 02:58:59.822931   51325 main.go:141] libmachine: (test-preload-813040) Calling .DriverName
	I0211 02:58:59.823198   51325 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0211 02:58:59.823221   51325 main.go:141] libmachine: (test-preload-813040) Calling .GetSSHHostname
	I0211 02:58:59.825807   51325 main.go:141] libmachine: (test-preload-813040) DBG | domain test-preload-813040 has defined MAC address 52:54:00:e3:5d:00 in network mk-test-preload-813040
	I0211 02:58:59.826149   51325 main.go:141] libmachine: (test-preload-813040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:5d:00", ip: ""} in network mk-test-preload-813040: {Iface:virbr1 ExpiryTime:2025-02-11 03:58:51 +0000 UTC Type:0 Mac:52:54:00:e3:5d:00 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:test-preload-813040 Clientid:01:52:54:00:e3:5d:00}
	I0211 02:58:59.826170   51325 main.go:141] libmachine: (test-preload-813040) DBG | domain test-preload-813040 has defined IP address 192.168.39.238 and MAC address 52:54:00:e3:5d:00 in network mk-test-preload-813040
	I0211 02:58:59.826328   51325 main.go:141] libmachine: (test-preload-813040) Calling .GetSSHPort
	I0211 02:58:59.826505   51325 main.go:141] libmachine: (test-preload-813040) Calling .GetSSHKeyPath
	I0211 02:58:59.826669   51325 main.go:141] libmachine: (test-preload-813040) Calling .GetSSHUsername
	I0211 02:58:59.826808   51325 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/test-preload-813040/id_rsa Username:docker}
	I0211 02:58:59.910302   51325 ssh_runner.go:195] Run: cat /etc/os-release
	I0211 02:58:59.914239   51325 info.go:137] Remote host: Buildroot 2023.02.9
	I0211 02:58:59.914270   51325 filesync.go:126] Scanning /home/jenkins/minikube-integration/20400-12456/.minikube/addons for local assets ...
	I0211 02:58:59.914331   51325 filesync.go:126] Scanning /home/jenkins/minikube-integration/20400-12456/.minikube/files for local assets ...
	I0211 02:58:59.914412   51325 filesync.go:149] local asset: /home/jenkins/minikube-integration/20400-12456/.minikube/files/etc/ssl/certs/196452.pem -> 196452.pem in /etc/ssl/certs
	I0211 02:58:59.914497   51325 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0211 02:58:59.925621   51325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/files/etc/ssl/certs/196452.pem --> /etc/ssl/certs/196452.pem (1708 bytes)
	I0211 02:58:59.951565   51325 start.go:296] duration metric: took 128.646289ms for postStartSetup
	I0211 02:58:59.951618   51325 fix.go:56] duration metric: took 19.058056938s for fixHost
	I0211 02:58:59.951642   51325 main.go:141] libmachine: (test-preload-813040) Calling .GetSSHHostname
	I0211 02:58:59.954469   51325 main.go:141] libmachine: (test-preload-813040) DBG | domain test-preload-813040 has defined MAC address 52:54:00:e3:5d:00 in network mk-test-preload-813040
	I0211 02:58:59.954835   51325 main.go:141] libmachine: (test-preload-813040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:5d:00", ip: ""} in network mk-test-preload-813040: {Iface:virbr1 ExpiryTime:2025-02-11 03:58:51 +0000 UTC Type:0 Mac:52:54:00:e3:5d:00 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:test-preload-813040 Clientid:01:52:54:00:e3:5d:00}
	I0211 02:58:59.954894   51325 main.go:141] libmachine: (test-preload-813040) DBG | domain test-preload-813040 has defined IP address 192.168.39.238 and MAC address 52:54:00:e3:5d:00 in network mk-test-preload-813040
	I0211 02:58:59.955020   51325 main.go:141] libmachine: (test-preload-813040) Calling .GetSSHPort
	I0211 02:58:59.955230   51325 main.go:141] libmachine: (test-preload-813040) Calling .GetSSHKeyPath
	I0211 02:58:59.955395   51325 main.go:141] libmachine: (test-preload-813040) Calling .GetSSHKeyPath
	I0211 02:58:59.955537   51325 main.go:141] libmachine: (test-preload-813040) Calling .GetSSHUsername
	I0211 02:58:59.955698   51325 main.go:141] libmachine: Using SSH client type: native
	I0211 02:58:59.955869   51325 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0211 02:58:59.955879   51325 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0211 02:59:00.063454   51325 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739242740.026021289
	
	I0211 02:59:00.063478   51325 fix.go:216] guest clock: 1739242740.026021289
	I0211 02:59:00.063491   51325 fix.go:229] Guest: 2025-02-11 02:59:00.026021289 +0000 UTC Remote: 2025-02-11 02:58:59.95162318 +0000 UTC m=+25.221574963 (delta=74.398109ms)
	I0211 02:59:00.063514   51325 fix.go:200] guest clock delta is within tolerance: 74.398109ms
	I0211 02:59:00.063520   51325 start.go:83] releasing machines lock for "test-preload-813040", held for 19.169973547s
	I0211 02:59:00.063541   51325 main.go:141] libmachine: (test-preload-813040) Calling .DriverName
	I0211 02:59:00.063784   51325 main.go:141] libmachine: (test-preload-813040) Calling .GetIP
	I0211 02:59:00.066365   51325 main.go:141] libmachine: (test-preload-813040) DBG | domain test-preload-813040 has defined MAC address 52:54:00:e3:5d:00 in network mk-test-preload-813040
	I0211 02:59:00.066679   51325 main.go:141] libmachine: (test-preload-813040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:5d:00", ip: ""} in network mk-test-preload-813040: {Iface:virbr1 ExpiryTime:2025-02-11 03:58:51 +0000 UTC Type:0 Mac:52:54:00:e3:5d:00 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:test-preload-813040 Clientid:01:52:54:00:e3:5d:00}
	I0211 02:59:00.066701   51325 main.go:141] libmachine: (test-preload-813040) DBG | domain test-preload-813040 has defined IP address 192.168.39.238 and MAC address 52:54:00:e3:5d:00 in network mk-test-preload-813040
	I0211 02:59:00.066853   51325 main.go:141] libmachine: (test-preload-813040) Calling .DriverName
	I0211 02:59:00.067369   51325 main.go:141] libmachine: (test-preload-813040) Calling .DriverName
	I0211 02:59:00.067531   51325 main.go:141] libmachine: (test-preload-813040) Calling .DriverName
	I0211 02:59:00.067630   51325 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0211 02:59:00.067672   51325 main.go:141] libmachine: (test-preload-813040) Calling .GetSSHHostname
	I0211 02:59:00.067727   51325 ssh_runner.go:195] Run: cat /version.json
	I0211 02:59:00.067748   51325 main.go:141] libmachine: (test-preload-813040) Calling .GetSSHHostname
	I0211 02:59:00.070266   51325 main.go:141] libmachine: (test-preload-813040) DBG | domain test-preload-813040 has defined MAC address 52:54:00:e3:5d:00 in network mk-test-preload-813040
	I0211 02:59:00.070601   51325 main.go:141] libmachine: (test-preload-813040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:5d:00", ip: ""} in network mk-test-preload-813040: {Iface:virbr1 ExpiryTime:2025-02-11 03:58:51 +0000 UTC Type:0 Mac:52:54:00:e3:5d:00 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:test-preload-813040 Clientid:01:52:54:00:e3:5d:00}
	I0211 02:59:00.070629   51325 main.go:141] libmachine: (test-preload-813040) DBG | domain test-preload-813040 has defined IP address 192.168.39.238 and MAC address 52:54:00:e3:5d:00 in network mk-test-preload-813040
	I0211 02:59:00.070786   51325 main.go:141] libmachine: (test-preload-813040) DBG | domain test-preload-813040 has defined MAC address 52:54:00:e3:5d:00 in network mk-test-preload-813040
	I0211 02:59:00.070800   51325 main.go:141] libmachine: (test-preload-813040) Calling .GetSSHPort
	I0211 02:59:00.070987   51325 main.go:141] libmachine: (test-preload-813040) Calling .GetSSHKeyPath
	I0211 02:59:00.071151   51325 main.go:141] libmachine: (test-preload-813040) Calling .GetSSHUsername
	I0211 02:59:00.071180   51325 main.go:141] libmachine: (test-preload-813040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:5d:00", ip: ""} in network mk-test-preload-813040: {Iface:virbr1 ExpiryTime:2025-02-11 03:58:51 +0000 UTC Type:0 Mac:52:54:00:e3:5d:00 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:test-preload-813040 Clientid:01:52:54:00:e3:5d:00}
	I0211 02:59:00.071209   51325 main.go:141] libmachine: (test-preload-813040) DBG | domain test-preload-813040 has defined IP address 192.168.39.238 and MAC address 52:54:00:e3:5d:00 in network mk-test-preload-813040
	I0211 02:59:00.071269   51325 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/test-preload-813040/id_rsa Username:docker}
	I0211 02:59:00.071340   51325 main.go:141] libmachine: (test-preload-813040) Calling .GetSSHPort
	I0211 02:59:00.071484   51325 main.go:141] libmachine: (test-preload-813040) Calling .GetSSHKeyPath
	I0211 02:59:00.071634   51325 main.go:141] libmachine: (test-preload-813040) Calling .GetSSHUsername
	I0211 02:59:00.071757   51325 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/test-preload-813040/id_rsa Username:docker}
	I0211 02:59:00.168373   51325 ssh_runner.go:195] Run: systemctl --version
	I0211 02:59:00.174295   51325 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0211 02:59:00.326913   51325 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0211 02:59:00.332979   51325 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0211 02:59:00.333031   51325 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0211 02:59:00.348578   51325 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0211 02:59:00.348598   51325 start.go:495] detecting cgroup driver to use...
	I0211 02:59:00.348648   51325 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0211 02:59:00.367840   51325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0211 02:59:00.382682   51325 docker.go:217] disabling cri-docker service (if available) ...
	I0211 02:59:00.382742   51325 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0211 02:59:00.397192   51325 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0211 02:59:00.411963   51325 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0211 02:59:00.524890   51325 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0211 02:59:00.666091   51325 docker.go:233] disabling docker service ...
	I0211 02:59:00.666139   51325 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0211 02:59:00.679538   51325 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0211 02:59:00.692269   51325 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0211 02:59:00.834654   51325 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0211 02:59:00.943627   51325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0211 02:59:00.956647   51325 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0211 02:59:00.973948   51325 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0211 02:59:00.974011   51325 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 02:59:00.983967   51325 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0211 02:59:00.984054   51325 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 02:59:00.994064   51325 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 02:59:01.003947   51325 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 02:59:01.013815   51325 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0211 02:59:01.024055   51325 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 02:59:01.033585   51325 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 02:59:01.048592   51325 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 02:59:01.057556   51325 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0211 02:59:01.065826   51325 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0211 02:59:01.065891   51325 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0211 02:59:01.077798   51325 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
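
The netfilter step probes the bridge sysctl first; when /proc/sys/net/bridge/bridge-nf-call-iptables is absent it loads br_netfilter and then enables IPv4 forwarding, which kube-proxy and the bridge CNI need. A minimal local equivalent, again assuming direct shell access instead of the SSH runner:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Probe the bridge netfilter sysctl; a non-zero exit usually just means
	// br_netfilter is not loaded yet, so load the module and move on.
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		fmt.Println("bridge-nf-call-iptables not available, loading br_netfilter:", err)
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			fmt.Fprintln(os.Stderr, "modprobe br_netfilter failed:", err)
		}
	}
	// Enable IPv4 forwarding, required for pod-to-pod and service traffic.
	if err := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
		fmt.Fprintln(os.Stderr, "enabling ip_forward failed:", err)
	}
}
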
	I0211 02:59:01.086579   51325 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0211 02:59:01.201742   51325 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0211 02:59:01.284000   51325 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0211 02:59:01.284076   51325 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0211 02:59:01.288759   51325 start.go:563] Will wait 60s for crictl version
	I0211 02:59:01.288801   51325 ssh_runner.go:195] Run: which crictl
	I0211 02:59:01.292196   51325 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0211 02:59:01.329108   51325 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0211 02:59:01.329198   51325 ssh_runner.go:195] Run: crio --version
	I0211 02:59:01.354587   51325 ssh_runner.go:195] Run: crio --version
	I0211 02:59:01.382382   51325 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0211 02:59:01.384062   51325 main.go:141] libmachine: (test-preload-813040) Calling .GetIP
	I0211 02:59:01.386805   51325 main.go:141] libmachine: (test-preload-813040) DBG | domain test-preload-813040 has defined MAC address 52:54:00:e3:5d:00 in network mk-test-preload-813040
	I0211 02:59:01.387159   51325 main.go:141] libmachine: (test-preload-813040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:5d:00", ip: ""} in network mk-test-preload-813040: {Iface:virbr1 ExpiryTime:2025-02-11 03:58:51 +0000 UTC Type:0 Mac:52:54:00:e3:5d:00 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:test-preload-813040 Clientid:01:52:54:00:e3:5d:00}
	I0211 02:59:01.387189   51325 main.go:141] libmachine: (test-preload-813040) DBG | domain test-preload-813040 has defined IP address 192.168.39.238 and MAC address 52:54:00:e3:5d:00 in network mk-test-preload-813040
	I0211 02:59:01.387394   51325 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0211 02:59:01.390990   51325 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
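
The /etc/hosts edit is idempotent: any existing host.minikube.internal line is filtered out before the fresh 192.168.39.1 mapping is appended, so repeated starts never accumulate duplicates. A small Go sketch of the same rewrite, assuming the file can be written directly (path, IP, and hostname are taken from the log line above):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any line already ending in "\t<host>" and appends a
// fresh "<ip>\t<host>" mapping, so repeated runs leave exactly one entry.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
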
	I0211 02:59:01.405302   51325 kubeadm.go:883] updating cluster {Name:test-preload-813040 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-813040 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0211 02:59:01.405415   51325 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0211 02:59:01.405462   51325 ssh_runner.go:195] Run: sudo crictl images --output json
	I0211 02:59:01.438861   51325 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0211 02:59:01.438936   51325 ssh_runner.go:195] Run: which lz4
	I0211 02:59:01.442373   51325 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0211 02:59:01.445884   51325 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0211 02:59:01.445910   51325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0211 02:59:02.827415   51325 crio.go:462] duration metric: took 1.38507362s to copy over tarball
	I0211 02:59:02.827491   51325 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0211 02:59:05.211620   51325 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.384102898s)
	I0211 02:59:05.211658   51325 crio.go:469] duration metric: took 2.384212882s to extract the tarball
	I0211 02:59:05.211667   51325 ssh_runner.go:146] rm: /preloaded.tar.lz4
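
Because /preloaded.tar.lz4 was not on the node, the ~459 MB preload tarball is copied over, unpacked into /var with security xattrs preserved (so image layers keep their file capabilities), and then removed. A condensed sketch of the extract-and-clean-up step, assuming the tarball has already been transferred:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"
	// Only extract when the tarball is actually present on the node; in the
	// real flow it is scp'd over first when the stat above fails.
	if _, err := os.Stat(tarball); err != nil {
		fmt.Fprintln(os.Stderr, "preload tarball missing:", err)
		return
	}
	// Preserve security xattrs and pipe tar through lz4 (-I lz4) while
	// unpacking under /var, matching the command in the log.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Fprintf(os.Stderr, "extract failed: %v\n%s", err, out)
		return
	}
	// The runner removes the tarball afterwards to free disk space.
	_ = exec.Command("sudo", "rm", "-f", tarball).Run()
}
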
	I0211 02:59:05.251743   51325 ssh_runner.go:195] Run: sudo crictl images --output json
	I0211 02:59:05.299762   51325 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0211 02:59:05.299790   51325 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0211 02:59:05.299855   51325 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0211 02:59:05.299870   51325 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0211 02:59:05.299887   51325 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0211 02:59:05.299901   51325 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0211 02:59:05.299915   51325 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0211 02:59:05.299932   51325 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0211 02:59:05.299949   51325 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0211 02:59:05.300003   51325 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0211 02:59:05.301342   51325 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0211 02:59:05.301342   51325 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0211 02:59:05.301344   51325 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0211 02:59:05.301342   51325 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0211 02:59:05.301422   51325 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0211 02:59:05.301424   51325 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0211 02:59:05.301344   51325 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0211 02:59:05.301422   51325 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0211 02:59:05.432743   51325 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0211 02:59:05.439989   51325 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0211 02:59:05.449112   51325 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0211 02:59:05.449738   51325 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0211 02:59:05.454544   51325 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0211 02:59:05.479355   51325 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0211 02:59:05.480853   51325 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0211 02:59:05.517481   51325 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0211 02:59:05.517525   51325 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0211 02:59:05.517575   51325 ssh_runner.go:195] Run: which crictl
	I0211 02:59:05.549847   51325 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0211 02:59:05.549891   51325 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0211 02:59:05.549900   51325 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0211 02:59:05.549925   51325 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0211 02:59:05.549944   51325 ssh_runner.go:195] Run: which crictl
	I0211 02:59:05.549965   51325 ssh_runner.go:195] Run: which crictl
	I0211 02:59:05.598374   51325 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0211 02:59:05.598411   51325 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0211 02:59:05.598453   51325 ssh_runner.go:195] Run: which crictl
	I0211 02:59:05.609617   51325 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0211 02:59:05.609648   51325 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0211 02:59:05.609700   51325 ssh_runner.go:195] Run: which crictl
	I0211 02:59:05.609697   51325 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0211 02:59:05.609767   51325 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0211 02:59:05.609768   51325 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0211 02:59:05.609793   51325 ssh_runner.go:195] Run: which crictl
	I0211 02:59:05.609799   51325 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0211 02:59:05.609840   51325 ssh_runner.go:195] Run: which crictl
	I0211 02:59:05.609855   51325 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0211 02:59:05.609902   51325 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0211 02:59:05.609958   51325 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0211 02:59:05.609908   51325 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0211 02:59:05.613599   51325 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0211 02:59:05.625665   51325 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0211 02:59:05.626080   51325 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0211 02:59:05.706110   51325 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0211 02:59:05.774430   51325 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0211 02:59:05.774464   51325 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0211 02:59:05.774559   51325 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0211 02:59:05.784874   51325 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0211 02:59:05.784911   51325 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0211 02:59:05.785000   51325 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0211 02:59:05.795256   51325 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0211 02:59:05.915024   51325 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0211 02:59:05.916952   51325 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0211 02:59:05.917155   51325 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0211 02:59:05.938828   51325 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0211 02:59:05.938940   51325 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0211 02:59:05.948258   51325 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0211 02:59:05.948326   51325 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0211 02:59:05.948363   51325 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0211 02:59:06.025259   51325 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0211 02:59:06.025372   51325 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0211 02:59:06.037337   51325 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0211 02:59:06.037462   51325 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0211 02:59:06.060640   51325 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0211 02:59:06.060761   51325 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0211 02:59:06.064426   51325 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0211 02:59:06.064502   51325 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0211 02:59:06.064538   51325 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0211 02:59:06.064550   51325 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0211 02:59:06.064564   51325 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0211 02:59:06.064594   51325 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0211 02:59:06.064607   51325 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0211 02:59:06.064614   51325 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0211 02:59:06.064645   51325 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0211 02:59:06.064680   51325 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0211 02:59:06.067129   51325 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0211 02:59:06.071336   51325 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0211 02:59:06.076323   51325 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0211 02:59:06.076468   51325 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0211 02:59:06.076878   51325 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0211 02:59:06.211823   51325 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0211 02:59:09.026227   51325 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4: (2.961596218s)
	I0211 02:59:09.026267   51325 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0211 02:59:09.026304   51325 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0211 02:59:09.026312   51325 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.814438896s)
	I0211 02:59:09.026362   51325 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0211 02:59:11.069640   51325 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.043254988s)
	I0211 02:59:11.069675   51325 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0211 02:59:11.069696   51325 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0211 02:59:11.069742   51325 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0211 02:59:11.508642   51325 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0211 02:59:11.508698   51325 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0211 02:59:11.508748   51325 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0211 02:59:11.850084   51325 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0211 02:59:11.850134   51325 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0211 02:59:11.850234   51325 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0211 02:59:12.690350   51325 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0211 02:59:12.690404   51325 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0211 02:59:12.690454   51325 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0211 02:59:13.432445   51325 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0211 02:59:13.432500   51325 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0211 02:59:13.432569   51325 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0211 02:59:13.575400   51325 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0211 02:59:13.575464   51325 cache_images.go:123] Successfully loaded all cached images
	I0211 02:59:13.575475   51325 cache_images.go:92] duration metric: took 8.275667495s to LoadCachedImages
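
Since the preload did not contain the v1.24.4 images, each image tarball is loaded from the host cache in turn: stat the file under /var/lib/minikube/images, skip the transfer if it already exists, then run sudo podman load -i so CRI-O's storage picks it up. A simplified version of that loop (the image list and directory come from the log; the copy-on-miss branch is omitted):

package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
)

func main() {
	images := []string{
		"kube-controller-manager_v1.24.4", "etcd_3.5.3-0", "kube-scheduler_v1.24.4",
		"coredns_v1.8.6", "kube-proxy_v1.24.4", "kube-apiserver_v1.24.4", "pause_3.7",
	}
	for _, img := range images {
		tarball := filepath.Join("/var/lib/minikube/images", img)
		// Assumes the tarball is already in place, as the
		// "copy: skipping ... (exists)" lines above indicate.
		out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
		if err != nil {
			fmt.Printf("loading %s failed: %v\n%s\n", img, err, out)
			continue
		}
		fmt.Printf("loaded %s\n", img)
	}
}
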
	I0211 02:59:13.575492   51325 kubeadm.go:934] updating node { 192.168.39.238 8443 v1.24.4 crio true true} ...
	I0211 02:59:13.575636   51325 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-813040 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-813040 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0211 02:59:13.575710   51325 ssh_runner.go:195] Run: crio config
	I0211 02:59:13.618933   51325 cni.go:84] Creating CNI manager for ""
	I0211 02:59:13.618953   51325 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0211 02:59:13.618962   51325 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0211 02:59:13.618979   51325 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.238 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-813040 NodeName:test-preload-813040 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.238"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.238 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0211 02:59:13.619107   51325 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.238
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-813040"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.238
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.238"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0211 02:59:13.619169   51325 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0211 02:59:13.628777   51325 binaries.go:44] Found k8s binaries, skipping transfer
	I0211 02:59:13.628867   51325 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0211 02:59:13.637698   51325 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0211 02:59:13.653347   51325 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0211 02:59:13.669220   51325 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0211 02:59:13.684872   51325 ssh_runner.go:195] Run: grep 192.168.39.238	control-plane.minikube.internal$ /etc/hosts
	I0211 02:59:13.688579   51325 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.238	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0211 02:59:13.700139   51325 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0211 02:59:13.824779   51325 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0211 02:59:13.841194   51325 certs.go:68] Setting up /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/test-preload-813040 for IP: 192.168.39.238
	I0211 02:59:13.841220   51325 certs.go:194] generating shared ca certs ...
	I0211 02:59:13.841248   51325 certs.go:226] acquiring lock for ca certs: {Name:mk14e70e4f3b98aff6eac535114852cc1d70eb3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 02:59:13.841450   51325 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20400-12456/.minikube/ca.key
	I0211 02:59:13.841514   51325 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20400-12456/.minikube/proxy-client-ca.key
	I0211 02:59:13.841530   51325 certs.go:256] generating profile certs ...
	I0211 02:59:13.841636   51325 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/test-preload-813040/client.key
	I0211 02:59:13.841726   51325 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/test-preload-813040/apiserver.key.0d2fdce6
	I0211 02:59:13.841775   51325 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/test-preload-813040/proxy-client.key
	I0211 02:59:13.841934   51325 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/19645.pem (1338 bytes)
	W0211 02:59:13.841970   51325 certs.go:480] ignoring /home/jenkins/minikube-integration/20400-12456/.minikube/certs/19645_empty.pem, impossibly tiny 0 bytes
	I0211 02:59:13.841980   51325 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca-key.pem (1675 bytes)
	I0211 02:59:13.842004   51325 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca.pem (1078 bytes)
	I0211 02:59:13.842028   51325 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/cert.pem (1123 bytes)
	I0211 02:59:13.842052   51325 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/key.pem (1679 bytes)
	I0211 02:59:13.842089   51325 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12456/.minikube/files/etc/ssl/certs/196452.pem (1708 bytes)
	I0211 02:59:13.842733   51325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0211 02:59:13.895869   51325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0211 02:59:13.921015   51325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0211 02:59:13.956075   51325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0211 02:59:13.984495   51325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/test-preload-813040/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0211 02:59:14.026863   51325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/test-preload-813040/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0211 02:59:14.062399   51325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/test-preload-813040/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0211 02:59:14.085606   51325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/test-preload-813040/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0211 02:59:14.112158   51325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0211 02:59:14.137960   51325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/certs/19645.pem --> /usr/share/ca-certificates/19645.pem (1338 bytes)
	I0211 02:59:14.163222   51325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/files/etc/ssl/certs/196452.pem --> /usr/share/ca-certificates/196452.pem (1708 bytes)
	I0211 02:59:14.189839   51325 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0211 02:59:14.208294   51325 ssh_runner.go:195] Run: openssl version
	I0211 02:59:14.213811   51325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0211 02:59:14.224271   51325 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0211 02:59:14.228663   51325 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 11 02:02 /usr/share/ca-certificates/minikubeCA.pem
	I0211 02:59:14.228734   51325 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0211 02:59:14.234314   51325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0211 02:59:14.244765   51325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19645.pem && ln -fs /usr/share/ca-certificates/19645.pem /etc/ssl/certs/19645.pem"
	I0211 02:59:14.254814   51325 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19645.pem
	I0211 02:59:14.259040   51325 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 11 02:09 /usr/share/ca-certificates/19645.pem
	I0211 02:59:14.259107   51325 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19645.pem
	I0211 02:59:14.264596   51325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/19645.pem /etc/ssl/certs/51391683.0"
	I0211 02:59:14.275654   51325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/196452.pem && ln -fs /usr/share/ca-certificates/196452.pem /etc/ssl/certs/196452.pem"
	I0211 02:59:14.287124   51325 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/196452.pem
	I0211 02:59:14.291734   51325 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 11 02:09 /usr/share/ca-certificates/196452.pem
	I0211 02:59:14.291782   51325 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/196452.pem
	I0211 02:59:14.297624   51325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/196452.pem /etc/ssl/certs/3ec20f2e.0"
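
Each CA certificate is made visible to OpenSSL-based clients twice: once under its own name in /etc/ssl/certs and once under its subject hash (for example b5213941.0), which is the lookup key OpenSSL uses for trust anchors. A compact sketch of that per-certificate step, shelling out to openssl for the hash exactly as the log does (the function name is illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert links pemPath into /etc/ssl/certs under its basename and then
// under its OpenSSL subject hash, e.g. /etc/ssl/certs/b5213941.0.
func installCACert(pemPath string) error {
	nameLink := filepath.Join("/etc/ssl/certs", filepath.Base(pemPath))
	if err := os.Symlink(pemPath, nameLink); err != nil && !os.IsExist(err) {
		return err
	}
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hashLink := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	// The hash link points at the named link, matching the layout in the log.
	if err := os.Symlink(nameLink, hashLink); err != nil && !os.IsExist(err) {
		return err
	}
	return nil
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
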
	I0211 02:59:14.308389   51325 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0211 02:59:14.313002   51325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0211 02:59:14.319153   51325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0211 02:59:14.325646   51325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0211 02:59:14.331529   51325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0211 02:59:14.336944   51325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0211 02:59:14.342269   51325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
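
openssl x509 -checkend 86400 exits non-zero when a certificate will expire within the next 24 hours, and the run above applies it to the apiserver, etcd, and front-proxy client certs before reusing them. The same check can be done in pure Go by parsing the PEM and comparing NotAfter; a sketch using one of the paths from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file will
// expire within the given window, mirroring `openssl x509 -checkend`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
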
	I0211 02:59:14.347714   51325 kubeadm.go:392] StartCluster: {Name:test-preload-813040 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-813040 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0211 02:59:14.347794   51325 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0211 02:59:14.347867   51325 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0211 02:59:14.389023   51325 cri.go:89] found id: ""
	I0211 02:59:14.389104   51325 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0211 02:59:14.398667   51325 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0211 02:59:14.398693   51325 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0211 02:59:14.398734   51325 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0211 02:59:14.408659   51325 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0211 02:59:14.409088   51325 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-813040" does not appear in /home/jenkins/minikube-integration/20400-12456/kubeconfig
	I0211 02:59:14.409205   51325 kubeconfig.go:62] /home/jenkins/minikube-integration/20400-12456/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-813040" cluster setting kubeconfig missing "test-preload-813040" context setting]
	I0211 02:59:14.409609   51325 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12456/kubeconfig: {Name:mkd961d61f748b29ba3bb0ad55f8216d88f98444 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 02:59:14.410117   51325 kapi.go:59] client config for test-preload-813040: &rest.Config{Host:"https://192.168.39.238:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20400-12456/.minikube/profiles/test-preload-813040/client.crt", KeyFile:"/home/jenkins/minikube-integration/20400-12456/.minikube/profiles/test-preload-813040/client.key", CAFile:"/home/jenkins/minikube-integration/20400-12456/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24df5e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0211 02:59:14.410491   51325 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0211 02:59:14.410507   51325 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0211 02:59:14.410517   51325 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0211 02:59:14.410526   51325 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0211 02:59:14.410805   51325 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0211 02:59:14.420029   51325 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.238
	I0211 02:59:14.420061   51325 kubeadm.go:1160] stopping kube-system containers ...
	I0211 02:59:14.420072   51325 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0211 02:59:14.420127   51325 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0211 02:59:14.452968   51325 cri.go:89] found id: ""
	I0211 02:59:14.453041   51325 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0211 02:59:14.469425   51325 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0211 02:59:14.478924   51325 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0211 02:59:14.478945   51325 kubeadm.go:157] found existing configuration files:
	
	I0211 02:59:14.478994   51325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0211 02:59:14.487925   51325 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0211 02:59:14.487997   51325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0211 02:59:14.497131   51325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0211 02:59:14.506043   51325 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0211 02:59:14.506105   51325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0211 02:59:14.515342   51325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0211 02:59:14.524073   51325 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0211 02:59:14.524140   51325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0211 02:59:14.533128   51325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0211 02:59:14.541477   51325 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0211 02:59:14.541533   51325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
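
All four kubeconfig files under /etc/kubernetes are missing here, so the grep for the expected control-plane endpoint fails for each one and the file is removed, letting kubeadm regenerate it in the next phase. The loop looks roughly like this (endpoint and file list as logged; a grep failure on a missing file is handled the same way as a stale one):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the endpoint is absent or the file does
		// not exist; either way the file is removed so kubeadm rewrites it.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%s stale or missing, removing\n", f)
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}
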
	I0211 02:59:14.549957   51325 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0211 02:59:14.558245   51325 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0211 02:59:14.654580   51325 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0211 02:59:15.310239   51325 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0211 02:59:15.560102   51325 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0211 02:59:15.612231   51325 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0211 02:59:15.671239   51325 api_server.go:52] waiting for apiserver process to appear ...
	I0211 02:59:15.671324   51325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 02:59:16.172038   51325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 02:59:16.672054   51325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 02:59:16.701889   51325 api_server.go:72] duration metric: took 1.030648583s to wait for apiserver process to appear ...
	I0211 02:59:16.701924   51325 api_server.go:88] waiting for apiserver healthz status ...
	I0211 02:59:16.701977   51325 api_server.go:253] Checking apiserver healthz at https://192.168.39.238:8443/healthz ...
	I0211 02:59:16.702557   51325 api_server.go:269] stopped: https://192.168.39.238:8443/healthz: Get "https://192.168.39.238:8443/healthz": dial tcp 192.168.39.238:8443: connect: connection refused
	I0211 02:59:17.202202   51325 api_server.go:253] Checking apiserver healthz at https://192.168.39.238:8443/healthz ...
	I0211 02:59:20.913103   51325 api_server.go:279] https://192.168.39.238:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0211 02:59:20.913129   51325 api_server.go:103] status: https://192.168.39.238:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0211 02:59:20.913144   51325 api_server.go:253] Checking apiserver healthz at https://192.168.39.238:8443/healthz ...
	I0211 02:59:20.938025   51325 api_server.go:279] https://192.168.39.238:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0211 02:59:20.938050   51325 api_server.go:103] status: https://192.168.39.238:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0211 02:59:21.202528   51325 api_server.go:253] Checking apiserver healthz at https://192.168.39.238:8443/healthz ...
	I0211 02:59:21.207651   51325 api_server.go:279] https://192.168.39.238:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0211 02:59:21.207690   51325 api_server.go:103] status: https://192.168.39.238:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0211 02:59:21.702292   51325 api_server.go:253] Checking apiserver healthz at https://192.168.39.238:8443/healthz ...
	I0211 02:59:21.714594   51325 api_server.go:279] https://192.168.39.238:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0211 02:59:21.714632   51325 api_server.go:103] status: https://192.168.39.238:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0211 02:59:22.202220   51325 api_server.go:253] Checking apiserver healthz at https://192.168.39.238:8443/healthz ...
	I0211 02:59:22.207562   51325 api_server.go:279] https://192.168.39.238:8443/healthz returned 200:
	ok
	I0211 02:59:22.214184   51325 api_server.go:141] control plane version: v1.24.4
	I0211 02:59:22.214204   51325 api_server.go:131] duration metric: took 5.512271716s to wait for apiserver health ...
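
The health wait polls https://192.168.39.238:8443/healthz about twice a second, treating connection refusals, 403 responses (the anonymous user before RBAC bootstrap finishes), and 500 responses (poststarthooks still failing) as reasons to keep waiting until a plain 200/ok arrives. A standalone sketch of such a poller; InsecureSkipVerify is only an assumption to keep the example short, whereas the real client is built from the profile's CA and client certificates:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Assumption for brevity: skip TLS verification instead of loading
		// the cluster CA and client certificate.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.39.238:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body))
				return
			}
			// 403/500 while bootstrap hooks finish: note the status and retry.
			fmt.Printf("healthz %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}
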
	I0211 02:59:22.214212   51325 cni.go:84] Creating CNI manager for ""
	I0211 02:59:22.214218   51325 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0211 02:59:22.215963   51325 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0211 02:59:22.217001   51325 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0211 02:59:22.227410   51325 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
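"Configuring bridge CNI" here amounts to dropping a conflist into /etc/cni/net.d on the guest; the 496-byte payload scp'd above is a bridge + portmap chain. A sketch of installing such a file, with a representative config (the exact contents of minikube's 1-k8s.conflist may differ):

    package cni

    import "os"

    // bridgeConflist is a representative bridge CNI chain, not the literal
    // bytes minikube ships.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {"type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
         "ipMasq": true, "hairpinMode": true,
         "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}},
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }`

    // writeBridgeConflist installs the conflist where the kubelet and CRI-O
    // expect to find CNI configuration.
    func writeBridgeConflist() error {
        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
            return err
        }
        return os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644)
    }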
	I0211 02:59:22.244150   51325 system_pods.go:43] waiting for kube-system pods to appear ...
	I0211 02:59:22.247316   51325 system_pods.go:59] 7 kube-system pods found
	I0211 02:59:22.247349   51325 system_pods.go:61] "coredns-6d4b75cb6d-csxl9" [37d5f114-25dc-457d-8d25-6b40bbe680b9] Running
	I0211 02:59:22.247363   51325 system_pods.go:61] "etcd-test-preload-813040" [5a6cb394-2799-46e5-997a-3fcfe3a541a6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0211 02:59:22.247376   51325 system_pods.go:61] "kube-apiserver-test-preload-813040" [0585d252-d7c0-4ebe-9b99-6a549414f38d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0211 02:59:22.247385   51325 system_pods.go:61] "kube-controller-manager-test-preload-813040" [64bd24bd-c6a1-4953-aefb-5b4873f1cf97] Running
	I0211 02:59:22.247391   51325 system_pods.go:61] "kube-proxy-zm5w7" [637a5162-7eca-45fb-80b3-8de7ba1671e9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0211 02:59:22.247395   51325 system_pods.go:61] "kube-scheduler-test-preload-813040" [b15ca82c-65d3-40ae-89f5-a277c0262d3a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0211 02:59:22.247399   51325 system_pods.go:61] "storage-provisioner" [ff504d6a-f078-44f3-a313-d0e7f19889fb] Running
	I0211 02:59:22.247404   51325 system_pods.go:74] duration metric: took 3.240557ms to wait for pod list to return data ...
	I0211 02:59:22.247412   51325 node_conditions.go:102] verifying NodePressure condition ...
	I0211 02:59:22.249252   51325 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0211 02:59:22.249270   51325 node_conditions.go:123] node cpu capacity is 2
	I0211 02:59:22.249287   51325 node_conditions.go:105] duration metric: took 1.869938ms to run NodePressure ...
	I0211 02:59:22.249299   51325 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0211 02:59:22.425362   51325 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0211 02:59:22.428689   51325 kubeadm.go:739] kubelet initialised
	I0211 02:59:22.428716   51325 kubeadm.go:740] duration metric: took 3.328164ms waiting for restarted kubelet to initialise ...
	I0211 02:59:22.428726   51325 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0211 02:59:22.431960   51325 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-csxl9" in "kube-system" namespace to be "Ready" ...
	I0211 02:59:22.436204   51325 pod_ready.go:98] node "test-preload-813040" hosting pod "coredns-6d4b75cb6d-csxl9" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-813040" has status "Ready":"False"
	I0211 02:59:22.436227   51325 pod_ready.go:82] duration metric: took 4.246446ms for pod "coredns-6d4b75cb6d-csxl9" in "kube-system" namespace to be "Ready" ...
	E0211 02:59:22.436239   51325 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-813040" hosting pod "coredns-6d4b75cb6d-csxl9" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-813040" has status "Ready":"False"
	I0211 02:59:22.436248   51325 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-813040" in "kube-system" namespace to be "Ready" ...
	I0211 02:59:22.439338   51325 pod_ready.go:98] node "test-preload-813040" hosting pod "etcd-test-preload-813040" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-813040" has status "Ready":"False"
	I0211 02:59:22.439363   51325 pod_ready.go:82] duration metric: took 3.103523ms for pod "etcd-test-preload-813040" in "kube-system" namespace to be "Ready" ...
	E0211 02:59:22.439375   51325 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-813040" hosting pod "etcd-test-preload-813040" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-813040" has status "Ready":"False"
	I0211 02:59:22.439385   51325 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-813040" in "kube-system" namespace to be "Ready" ...
	I0211 02:59:22.442453   51325 pod_ready.go:98] node "test-preload-813040" hosting pod "kube-apiserver-test-preload-813040" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-813040" has status "Ready":"False"
	I0211 02:59:22.442470   51325 pod_ready.go:82] duration metric: took 3.077173ms for pod "kube-apiserver-test-preload-813040" in "kube-system" namespace to be "Ready" ...
	E0211 02:59:22.442478   51325 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-813040" hosting pod "kube-apiserver-test-preload-813040" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-813040" has status "Ready":"False"
	I0211 02:59:22.442485   51325 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-813040" in "kube-system" namespace to be "Ready" ...
	I0211 02:59:22.648453   51325 pod_ready.go:98] node "test-preload-813040" hosting pod "kube-controller-manager-test-preload-813040" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-813040" has status "Ready":"False"
	I0211 02:59:22.648484   51325 pod_ready.go:82] duration metric: took 205.984019ms for pod "kube-controller-manager-test-preload-813040" in "kube-system" namespace to be "Ready" ...
	E0211 02:59:22.648497   51325 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-813040" hosting pod "kube-controller-manager-test-preload-813040" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-813040" has status "Ready":"False"
	I0211 02:59:22.648505   51325 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-zm5w7" in "kube-system" namespace to be "Ready" ...
	I0211 02:59:23.047249   51325 pod_ready.go:98] node "test-preload-813040" hosting pod "kube-proxy-zm5w7" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-813040" has status "Ready":"False"
	I0211 02:59:23.047272   51325 pod_ready.go:82] duration metric: took 398.755299ms for pod "kube-proxy-zm5w7" in "kube-system" namespace to be "Ready" ...
	E0211 02:59:23.047280   51325 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-813040" hosting pod "kube-proxy-zm5w7" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-813040" has status "Ready":"False"
	I0211 02:59:23.047286   51325 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-813040" in "kube-system" namespace to be "Ready" ...
	I0211 02:59:23.448877   51325 pod_ready.go:98] node "test-preload-813040" hosting pod "kube-scheduler-test-preload-813040" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-813040" has status "Ready":"False"
	I0211 02:59:23.448908   51325 pod_ready.go:82] duration metric: took 401.614141ms for pod "kube-scheduler-test-preload-813040" in "kube-system" namespace to be "Ready" ...
	E0211 02:59:23.448920   51325 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-813040" hosting pod "kube-scheduler-test-preload-813040" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-813040" has status "Ready":"False"
	I0211 02:59:23.448929   51325 pod_ready.go:39] duration metric: took 1.020192747s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
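Each pod_ready wait above inspects the pod's Ready condition, but short-circuits with the "skipping!" errors while the hosting node is not yet Ready. A minimal client-go sketch of the Ready-condition check, assuming an already-constructed clientset; isPodReady is an illustrative helper rather than minikube's own code:

    package kverify

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // isPodReady reports whether the named kube-system pod has the Ready
    // condition set to True.
    func isPodReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }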
	I0211 02:59:23.448955   51325 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0211 02:59:23.460586   51325 ops.go:34] apiserver oom_adj: -16
	I0211 02:59:23.460613   51325 kubeadm.go:597] duration metric: took 9.06191292s to restartPrimaryControlPlane
	I0211 02:59:23.460625   51325 kubeadm.go:394] duration metric: took 9.112914235s to StartCluster
	I0211 02:59:23.460659   51325 settings.go:142] acquiring lock: {Name:mkf2645a714cc5873c434b18e1494d4128c48052 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 02:59:23.460747   51325 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20400-12456/kubeconfig
	I0211 02:59:23.461667   51325 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12456/kubeconfig: {Name:mkd961d61f748b29ba3bb0ad55f8216d88f98444 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 02:59:23.461971   51325 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0211 02:59:23.462039   51325 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0211 02:59:23.462126   51325 addons.go:69] Setting storage-provisioner=true in profile "test-preload-813040"
	I0211 02:59:23.462143   51325 addons.go:69] Setting default-storageclass=true in profile "test-preload-813040"
	I0211 02:59:23.462163   51325 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-813040"
	I0211 02:59:23.462203   51325 config.go:182] Loaded profile config "test-preload-813040": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0211 02:59:23.462149   51325 addons.go:238] Setting addon storage-provisioner=true in "test-preload-813040"
	W0211 02:59:23.462244   51325 addons.go:247] addon storage-provisioner should already be in state true
	I0211 02:59:23.462275   51325 host.go:66] Checking if "test-preload-813040" exists ...
	I0211 02:59:23.462664   51325 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:59:23.462664   51325 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:59:23.462706   51325 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:59:23.462712   51325 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:59:23.464502   51325 out.go:177] * Verifying Kubernetes components...
	I0211 02:59:23.465914   51325 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0211 02:59:23.477504   51325 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41735
	I0211 02:59:23.477902   51325 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:59:23.478402   51325 main.go:141] libmachine: Using API Version  1
	I0211 02:59:23.478421   51325 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:59:23.478694   51325 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:59:23.479213   51325 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:59:23.479254   51325 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:59:23.481791   51325 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37333
	I0211 02:59:23.482255   51325 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:59:23.482703   51325 main.go:141] libmachine: Using API Version  1
	I0211 02:59:23.482725   51325 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:59:23.483089   51325 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:59:23.483284   51325 main.go:141] libmachine: (test-preload-813040) Calling .GetState
	I0211 02:59:23.485461   51325 kapi.go:59] client config for test-preload-813040: &rest.Config{Host:"https://192.168.39.238:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20400-12456/.minikube/profiles/test-preload-813040/client.crt", KeyFile:"/home/jenkins/minikube-integration/20400-12456/.minikube/profiles/test-preload-813040/client.key", CAFile:"/home/jenkins/minikube-integration/20400-12456/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(
nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x24df5e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0211 02:59:23.485827   51325 addons.go:238] Setting addon default-storageclass=true in "test-preload-813040"
	W0211 02:59:23.485852   51325 addons.go:247] addon default-storageclass should already be in state true
	I0211 02:59:23.485881   51325 host.go:66] Checking if "test-preload-813040" exists ...
	I0211 02:59:23.486234   51325 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:59:23.486276   51325 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:59:23.494867   51325 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34191
	I0211 02:59:23.495280   51325 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:59:23.495730   51325 main.go:141] libmachine: Using API Version  1
	I0211 02:59:23.495754   51325 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:59:23.496072   51325 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:59:23.496326   51325 main.go:141] libmachine: (test-preload-813040) Calling .GetState
	I0211 02:59:23.497825   51325 main.go:141] libmachine: (test-preload-813040) Calling .DriverName
	I0211 02:59:23.499725   51325 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0211 02:59:23.501010   51325 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0211 02:59:23.501034   51325 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0211 02:59:23.501053   51325 main.go:141] libmachine: (test-preload-813040) Calling .GetSSHHostname
	I0211 02:59:23.501109   51325 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40907
	I0211 02:59:23.501473   51325 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:59:23.502001   51325 main.go:141] libmachine: Using API Version  1
	I0211 02:59:23.502028   51325 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:59:23.502403   51325 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:59:23.503022   51325 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:59:23.503066   51325 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:59:23.504260   51325 main.go:141] libmachine: (test-preload-813040) DBG | domain test-preload-813040 has defined MAC address 52:54:00:e3:5d:00 in network mk-test-preload-813040
	I0211 02:59:23.504714   51325 main.go:141] libmachine: (test-preload-813040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:5d:00", ip: ""} in network mk-test-preload-813040: {Iface:virbr1 ExpiryTime:2025-02-11 03:58:51 +0000 UTC Type:0 Mac:52:54:00:e3:5d:00 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:test-preload-813040 Clientid:01:52:54:00:e3:5d:00}
	I0211 02:59:23.504741   51325 main.go:141] libmachine: (test-preload-813040) DBG | domain test-preload-813040 has defined IP address 192.168.39.238 and MAC address 52:54:00:e3:5d:00 in network mk-test-preload-813040
	I0211 02:59:23.504856   51325 main.go:141] libmachine: (test-preload-813040) Calling .GetSSHPort
	I0211 02:59:23.505033   51325 main.go:141] libmachine: (test-preload-813040) Calling .GetSSHKeyPath
	I0211 02:59:23.505171   51325 main.go:141] libmachine: (test-preload-813040) Calling .GetSSHUsername
	I0211 02:59:23.505305   51325 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/test-preload-813040/id_rsa Username:docker}
	I0211 02:59:23.541118   51325 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41963
	I0211 02:59:23.541499   51325 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:59:23.541988   51325 main.go:141] libmachine: Using API Version  1
	I0211 02:59:23.542007   51325 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:59:23.542288   51325 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:59:23.542500   51325 main.go:141] libmachine: (test-preload-813040) Calling .GetState
	I0211 02:59:23.543973   51325 main.go:141] libmachine: (test-preload-813040) Calling .DriverName
	I0211 02:59:23.544207   51325 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0211 02:59:23.544225   51325 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0211 02:59:23.544244   51325 main.go:141] libmachine: (test-preload-813040) Calling .GetSSHHostname
	I0211 02:59:23.547384   51325 main.go:141] libmachine: (test-preload-813040) DBG | domain test-preload-813040 has defined MAC address 52:54:00:e3:5d:00 in network mk-test-preload-813040
	I0211 02:59:23.547816   51325 main.go:141] libmachine: (test-preload-813040) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e3:5d:00", ip: ""} in network mk-test-preload-813040: {Iface:virbr1 ExpiryTime:2025-02-11 03:58:51 +0000 UTC Type:0 Mac:52:54:00:e3:5d:00 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:test-preload-813040 Clientid:01:52:54:00:e3:5d:00}
	I0211 02:59:23.547852   51325 main.go:141] libmachine: (test-preload-813040) DBG | domain test-preload-813040 has defined IP address 192.168.39.238 and MAC address 52:54:00:e3:5d:00 in network mk-test-preload-813040
	I0211 02:59:23.548054   51325 main.go:141] libmachine: (test-preload-813040) Calling .GetSSHPort
	I0211 02:59:23.548267   51325 main.go:141] libmachine: (test-preload-813040) Calling .GetSSHKeyPath
	I0211 02:59:23.548404   51325 main.go:141] libmachine: (test-preload-813040) Calling .GetSSHUsername
	I0211 02:59:23.548538   51325 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/test-preload-813040/id_rsa Username:docker}
	I0211 02:59:23.647878   51325 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0211 02:59:23.662937   51325 node_ready.go:35] waiting up to 6m0s for node "test-preload-813040" to be "Ready" ...
	I0211 02:59:23.753872   51325 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0211 02:59:23.766007   51325 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
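The addon manifests are applied by running kubectl on the guest over SSH, pinned to the cluster's own binaries under /var/lib/minikube/binaries and its kubeconfig rather than the host's kubectl. Roughly, assuming a hypothetical runner abstraction in place of minikube's ssh_runner:

    package addons

    import "fmt"

    // sshRunner stands in for anything that can run a shell command on the guest.
    type sshRunner interface {
        Run(cmd string) error
    }

    // applyAddon applies a manifest with the in-guest kubectl pinned to the
    // cluster's Kubernetes version, mirroring the two apply commands above.
    func applyAddon(r sshRunner, k8sVersion, manifest string) error {
        cmd := fmt.Sprintf(
            "sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/%s/kubectl apply -f %s",
            k8sVersion, manifest)
        return r.Run(cmd)
    }

For example, applyAddon(r, "v1.24.4", "/etc/kubernetes/addons/storage-provisioner.yaml") reproduces the first command in the log.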
	I0211 02:59:24.721038   51325 main.go:141] libmachine: Making call to close driver server
	I0211 02:59:24.721066   51325 main.go:141] libmachine: (test-preload-813040) Calling .Close
	I0211 02:59:24.721133   51325 main.go:141] libmachine: Making call to close driver server
	I0211 02:59:24.721156   51325 main.go:141] libmachine: (test-preload-813040) Calling .Close
	I0211 02:59:24.721386   51325 main.go:141] libmachine: Successfully made call to close driver server
	I0211 02:59:24.721403   51325 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 02:59:24.721412   51325 main.go:141] libmachine: Making call to close driver server
	I0211 02:59:24.721418   51325 main.go:141] libmachine: (test-preload-813040) Calling .Close
	I0211 02:59:24.721439   51325 main.go:141] libmachine: (test-preload-813040) DBG | Closing plugin on server side
	I0211 02:59:24.721486   51325 main.go:141] libmachine: Successfully made call to close driver server
	I0211 02:59:24.721496   51325 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 02:59:24.721504   51325 main.go:141] libmachine: Making call to close driver server
	I0211 02:59:24.721512   51325 main.go:141] libmachine: (test-preload-813040) Calling .Close
	I0211 02:59:24.721635   51325 main.go:141] libmachine: Successfully made call to close driver server
	I0211 02:59:24.721651   51325 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 02:59:24.721765   51325 main.go:141] libmachine: Successfully made call to close driver server
	I0211 02:59:24.721777   51325 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 02:59:24.721785   51325 main.go:141] libmachine: (test-preload-813040) DBG | Closing plugin on server side
	I0211 02:59:24.726131   51325 main.go:141] libmachine: Making call to close driver server
	I0211 02:59:24.726146   51325 main.go:141] libmachine: (test-preload-813040) Calling .Close
	I0211 02:59:24.726369   51325 main.go:141] libmachine: Successfully made call to close driver server
	I0211 02:59:24.726383   51325 main.go:141] libmachine: (test-preload-813040) DBG | Closing plugin on server side
	I0211 02:59:24.726385   51325 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 02:59:24.728197   51325 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0211 02:59:24.729431   51325 addons.go:514] duration metric: took 1.267423846s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0211 02:59:25.668274   51325 node_ready.go:53] node "test-preload-813040" has status "Ready":"False"
	I0211 02:59:28.166550   51325 node_ready.go:53] node "test-preload-813040" has status "Ready":"False"
	I0211 02:59:30.166863   51325 node_ready.go:53] node "test-preload-813040" has status "Ready":"False"
	I0211 02:59:31.666625   51325 node_ready.go:49] node "test-preload-813040" has status "Ready":"True"
	I0211 02:59:31.666650   51325 node_ready.go:38] duration metric: took 8.003678537s for node "test-preload-813040" to be "Ready" ...
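The node_ready wait that just completed is the gate the earlier pod checks were blocked on: pod readiness is only trusted once the node's own Ready condition is True. A small client-go sketch of that wait, again with an assumed clientset and an illustrative helper name:

    package kverify

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls the node's Ready condition until it is True or the
    // timeout expires; pod readiness checks are skipped until this succeeds.
    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, cond := range node.Status.Conditions {
                    if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("node %q not Ready within %s", name, timeout)
    }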
	I0211 02:59:31.666659   51325 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0211 02:59:31.669637   51325 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-csxl9" in "kube-system" namespace to be "Ready" ...
	I0211 02:59:31.672968   51325 pod_ready.go:93] pod "coredns-6d4b75cb6d-csxl9" in "kube-system" namespace has status "Ready":"True"
	I0211 02:59:31.672982   51325 pod_ready.go:82] duration metric: took 3.323714ms for pod "coredns-6d4b75cb6d-csxl9" in "kube-system" namespace to be "Ready" ...
	I0211 02:59:31.672993   51325 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-813040" in "kube-system" namespace to be "Ready" ...
	I0211 02:59:32.178212   51325 pod_ready.go:93] pod "etcd-test-preload-813040" in "kube-system" namespace has status "Ready":"True"
	I0211 02:59:32.178236   51325 pod_ready.go:82] duration metric: took 505.237257ms for pod "etcd-test-preload-813040" in "kube-system" namespace to be "Ready" ...
	I0211 02:59:32.178245   51325 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-813040" in "kube-system" namespace to be "Ready" ...
	I0211 02:59:32.182192   51325 pod_ready.go:93] pod "kube-apiserver-test-preload-813040" in "kube-system" namespace has status "Ready":"True"
	I0211 02:59:32.182210   51325 pod_ready.go:82] duration metric: took 3.959832ms for pod "kube-apiserver-test-preload-813040" in "kube-system" namespace to be "Ready" ...
	I0211 02:59:32.182219   51325 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-813040" in "kube-system" namespace to be "Ready" ...
	I0211 02:59:33.187597   51325 pod_ready.go:93] pod "kube-controller-manager-test-preload-813040" in "kube-system" namespace has status "Ready":"True"
	I0211 02:59:33.187621   51325 pod_ready.go:82] duration metric: took 1.005395727s for pod "kube-controller-manager-test-preload-813040" in "kube-system" namespace to be "Ready" ...
	I0211 02:59:33.187637   51325 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zm5w7" in "kube-system" namespace to be "Ready" ...
	I0211 02:59:33.267603   51325 pod_ready.go:93] pod "kube-proxy-zm5w7" in "kube-system" namespace has status "Ready":"True"
	I0211 02:59:33.267626   51325 pod_ready.go:82] duration metric: took 79.98376ms for pod "kube-proxy-zm5w7" in "kube-system" namespace to be "Ready" ...
	I0211 02:59:33.267634   51325 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-813040" in "kube-system" namespace to be "Ready" ...
	I0211 02:59:35.273292   51325 pod_ready.go:103] pod "kube-scheduler-test-preload-813040" in "kube-system" namespace has status "Ready":"False"
	I0211 02:59:36.273080   51325 pod_ready.go:93] pod "kube-scheduler-test-preload-813040" in "kube-system" namespace has status "Ready":"True"
	I0211 02:59:36.273103   51325 pod_ready.go:82] duration metric: took 3.005462165s for pod "kube-scheduler-test-preload-813040" in "kube-system" namespace to be "Ready" ...
	I0211 02:59:36.273112   51325 pod_ready.go:39] duration metric: took 4.606442055s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0211 02:59:36.273129   51325 api_server.go:52] waiting for apiserver process to appear ...
	I0211 02:59:36.273192   51325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 02:59:36.287365   51325 api_server.go:72] duration metric: took 12.825348977s to wait for apiserver process to appear ...
	I0211 02:59:36.287388   51325 api_server.go:88] waiting for apiserver healthz status ...
	I0211 02:59:36.287407   51325 api_server.go:253] Checking apiserver healthz at https://192.168.39.238:8443/healthz ...
	I0211 02:59:36.292217   51325 api_server.go:279] https://192.168.39.238:8443/healthz returned 200:
	ok
	I0211 02:59:36.293133   51325 api_server.go:141] control plane version: v1.24.4
	I0211 02:59:36.293151   51325 api_server.go:131] duration metric: took 5.756858ms to wait for apiserver health ...
	I0211 02:59:36.293159   51325 system_pods.go:43] waiting for kube-system pods to appear ...
	I0211 02:59:36.296495   51325 system_pods.go:59] 7 kube-system pods found
	I0211 02:59:36.296514   51325 system_pods.go:61] "coredns-6d4b75cb6d-csxl9" [37d5f114-25dc-457d-8d25-6b40bbe680b9] Running
	I0211 02:59:36.296519   51325 system_pods.go:61] "etcd-test-preload-813040" [5a6cb394-2799-46e5-997a-3fcfe3a541a6] Running
	I0211 02:59:36.296523   51325 system_pods.go:61] "kube-apiserver-test-preload-813040" [0585d252-d7c0-4ebe-9b99-6a549414f38d] Running
	I0211 02:59:36.296528   51325 system_pods.go:61] "kube-controller-manager-test-preload-813040" [64bd24bd-c6a1-4953-aefb-5b4873f1cf97] Running
	I0211 02:59:36.296533   51325 system_pods.go:61] "kube-proxy-zm5w7" [637a5162-7eca-45fb-80b3-8de7ba1671e9] Running
	I0211 02:59:36.296537   51325 system_pods.go:61] "kube-scheduler-test-preload-813040" [b15ca82c-65d3-40ae-89f5-a277c0262d3a] Running
	I0211 02:59:36.296545   51325 system_pods.go:61] "storage-provisioner" [ff504d6a-f078-44f3-a313-d0e7f19889fb] Running
	I0211 02:59:36.296552   51325 system_pods.go:74] duration metric: took 3.387523ms to wait for pod list to return data ...
	I0211 02:59:36.296564   51325 default_sa.go:34] waiting for default service account to be created ...
	I0211 02:59:36.467453   51325 default_sa.go:45] found service account: "default"
	I0211 02:59:36.467475   51325 default_sa.go:55] duration metric: took 170.90533ms for default service account to be created ...
	I0211 02:59:36.467483   51325 system_pods.go:116] waiting for k8s-apps to be running ...
	I0211 02:59:36.668392   51325 system_pods.go:86] 7 kube-system pods found
	I0211 02:59:36.668419   51325 system_pods.go:89] "coredns-6d4b75cb6d-csxl9" [37d5f114-25dc-457d-8d25-6b40bbe680b9] Running
	I0211 02:59:36.668425   51325 system_pods.go:89] "etcd-test-preload-813040" [5a6cb394-2799-46e5-997a-3fcfe3a541a6] Running
	I0211 02:59:36.668429   51325 system_pods.go:89] "kube-apiserver-test-preload-813040" [0585d252-d7c0-4ebe-9b99-6a549414f38d] Running
	I0211 02:59:36.668432   51325 system_pods.go:89] "kube-controller-manager-test-preload-813040" [64bd24bd-c6a1-4953-aefb-5b4873f1cf97] Running
	I0211 02:59:36.668435   51325 system_pods.go:89] "kube-proxy-zm5w7" [637a5162-7eca-45fb-80b3-8de7ba1671e9] Running
	I0211 02:59:36.668438   51325 system_pods.go:89] "kube-scheduler-test-preload-813040" [b15ca82c-65d3-40ae-89f5-a277c0262d3a] Running
	I0211 02:59:36.668441   51325 system_pods.go:89] "storage-provisioner" [ff504d6a-f078-44f3-a313-d0e7f19889fb] Running
	I0211 02:59:36.668449   51325 system_pods.go:126] duration metric: took 200.960045ms to wait for k8s-apps to be running ...
	I0211 02:59:36.668458   51325 system_svc.go:44] waiting for kubelet service to be running ....
	I0211 02:59:36.668509   51325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0211 02:59:36.684104   51325 system_svc.go:56] duration metric: took 15.641417ms WaitForService to wait for kubelet
	I0211 02:59:36.684143   51325 kubeadm.go:582] duration metric: took 13.222116351s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0211 02:59:36.684162   51325 node_conditions.go:102] verifying NodePressure condition ...
	I0211 02:59:36.866835   51325 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0211 02:59:36.866867   51325 node_conditions.go:123] node cpu capacity is 2
	I0211 02:59:36.866907   51325 node_conditions.go:105] duration metric: took 182.734414ms to run NodePressure ...
	I0211 02:59:36.866922   51325 start.go:241] waiting for startup goroutines ...
	I0211 02:59:36.866932   51325 start.go:246] waiting for cluster config update ...
	I0211 02:59:36.866952   51325 start.go:255] writing updated cluster config ...
	I0211 02:59:36.867260   51325 ssh_runner.go:195] Run: rm -f paused
	I0211 02:59:36.912994   51325 start.go:600] kubectl: 1.32.1, cluster: 1.24.4 (minor skew: 8)
	I0211 02:59:36.914936   51325 out.go:201] 
	W0211 02:59:36.916395   51325 out.go:270] ! /usr/local/bin/kubectl is version 1.32.1, which may have incompatibilities with Kubernetes 1.24.4.
	I0211 02:59:36.917628   51325 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0211 02:59:36.918791   51325 out.go:177] * Done! kubectl is now configured to use "test-preload-813040" cluster and "default" namespace by default
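The skew warning above is simple arithmetic on the minor versions: the host kubectl is 1.32.1 and the cluster is 1.24.4, so the minor skew is 32 - 24 = 8, far beyond the one-minor-version skew kubectl officially supports. A tiny sketch of that check, with parseMinor as an illustrative helper:

    package version

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // parseMinor extracts the minor component of a "major.minor.patch" version.
    func parseMinor(v string) (int, error) {
        parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
        if len(parts) < 2 {
            return 0, fmt.Errorf("unexpected version %q", v)
        }
        return strconv.Atoi(parts[1])
    }

    // minorSkew reports the absolute minor-version difference, e.g.
    // minorSkew("1.32.1", "1.24.4") == 8, which triggers the warning above.
    func minorSkew(kubectl, cluster string) (int, error) {
        a, err := parseMinor(kubectl)
        if err != nil {
            return 0, err
        }
        b, err := parseMinor(cluster)
        if err != nil {
            return 0, err
        }
        if a < b {
            return b - a, nil
        }
        return a - b, nil
    }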
	
	
	==> CRI-O <==
	Feb 11 02:59:37 test-preload-813040 crio[667]: time="2025-02-11 02:59:37.772300574Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=aa42a067-92cf-48bf-8b61-f845d2a79049 name=/runtime.v1.RuntimeService/Version
	Feb 11 02:59:37 test-preload-813040 crio[667]: time="2025-02-11 02:59:37.773123662Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3809c78e-e383-4fa7-865f-4fac559f35d0 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 11 02:59:37 test-preload-813040 crio[667]: time="2025-02-11 02:59:37.773557863Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739242777773538962,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3809c78e-e383-4fa7-865f-4fac559f35d0 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 11 02:59:37 test-preload-813040 crio[667]: time="2025-02-11 02:59:37.774042614Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3a698ad8-b75c-4be1-a63c-5a11bbb437f3 name=/runtime.v1.RuntimeService/ListContainers
	Feb 11 02:59:37 test-preload-813040 crio[667]: time="2025-02-11 02:59:37.774086746Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3a698ad8-b75c-4be1-a63c-5a11bbb437f3 name=/runtime.v1.RuntimeService/ListContainers
	Feb 11 02:59:37 test-preload-813040 crio[667]: time="2025-02-11 02:59:37.774301934Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f2030048ed8c4e182933ee5d93d542751d73a81af126bd5151ca5f67cd87de02,PodSandboxId:6ef6f48fd64af797164ac63e29c740cee550cda774a1801329710a1e032f512d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1739242769655570738,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-csxl9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37d5f114-25dc-457d-8d25-6b40bbe680b9,},Annotations:map[string]string{io.kubernetes.container.hash: d232a19d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b5d4fbe762149c2dc3094d64d1488be325cea58fd58ae1d36a71590f9d94856,PodSandboxId:00a24e1f443c91de4132e9409a785fe466cbca64a04a66c2dd883895edf13f43,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1739242762698044273,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: ff504d6a-f078-44f3-a313-d0e7f19889fb,},Annotations:map[string]string{io.kubernetes.container.hash: c7888b6f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d700edf5d221d642e21d9f9e3c16d915778e5d696e18e44696218a811fed8fc5,PodSandboxId:d410b44edcce7631ecfffcf3f07febaf01b556cc7c24c3b168ea99330ca68c43,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1739242762653003491,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zm5w7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63
7a5162-7eca-45fb-80b3-8de7ba1671e9,},Annotations:map[string]string{io.kubernetes.container.hash: 464d0be4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13f044a0b34b4bbe4a86632bb6d125ae8cc5d36abb0ee155788249cfc6381f10,PodSandboxId:68b9ce5370ca3bc2faee50d929e003142295206a9fbc711f0ea77f1102301be8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1739242756390493528,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-813040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8062004d12ae27ab76ea414fe50c0d5,},Anno
tations:map[string]string{io.kubernetes.container.hash: 8d2f62f6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b5a5a7ab26f980a609f0767218edbcebb3b166697895c3f62ce709d669a9899,PodSandboxId:19a74da6e5b3d4aa22b71f7806b81f9ca42bd22c500e1cfc297ff2ce90991f5a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1739242756355899293,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-813040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c604e644d8c9d892f072d1c97ba3621,},Annotations:map
[string]string{io.kubernetes.container.hash: a57902b3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:300aaeda75a053e40273596a1f233ab1a6fb0f63c5c9a46f5d50ac8a288442b0,PodSandboxId:5c186ce6bb95314c625448e68d1d357c6556f15e21ceeef9a9bd79a9680a68e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1739242756404333922,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-813040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7be570644b13263039d98b1cbb9fd47,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eab71d50a6bac8c68b5940bfa99376771313a7f8aa144d0087475381521018e0,PodSandboxId:1acf068ae88cf5aeedb174138aa1c5643ff58834fe54871cb04040d1f274f2b4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1739242756318201327,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-813040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6e407fab3108c4ba121efc722b88ead,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3a698ad8-b75c-4be1-a63c-5a11bbb437f3 name=/runtime.v1.RuntimeService/ListContainers
	Feb 11 02:59:37 test-preload-813040 crio[667]: time="2025-02-11 02:59:37.806580032Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c8eceee2-acbd-4243-af2d-43e7363abacf name=/runtime.v1.RuntimeService/Version
	Feb 11 02:59:37 test-preload-813040 crio[667]: time="2025-02-11 02:59:37.806645207Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c8eceee2-acbd-4243-af2d-43e7363abacf name=/runtime.v1.RuntimeService/Version
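The Version, ImageFsInfo, and ListContainers request/response pairs in this CRI-O debug log are CRI gRPC calls arriving over the runtime's unix socket (from the kubelet, or from tools like crictl). A minimal client sketch of the Version call, assuming CRI-O's default socket path on this VM:

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // CRI-O's default CRI socket; adjust if the host is configured differently.
        conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        client := runtimeapi.NewRuntimeServiceClient(conn)
        resp, err := client.Version(ctx, &runtimeapi.VersionRequest{})
        if err != nil {
            panic(err)
        }
        // e.g. "cri-o 1.29.1 (API v1)", matching the VersionResponse logged above
        fmt.Printf("%s %s (API %s)\n", resp.RuntimeName, resp.RuntimeVersion, resp.RuntimeApiVersion)
    }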
	Feb 11 02:59:37 test-preload-813040 crio[667]: time="2025-02-11 02:59:37.807518550Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f51a04ee-79b5-4050-87d3-50036ade75c5 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 11 02:59:37 test-preload-813040 crio[667]: time="2025-02-11 02:59:37.808131330Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739242777807903512,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f51a04ee-79b5-4050-87d3-50036ade75c5 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 11 02:59:37 test-preload-813040 crio[667]: time="2025-02-11 02:59:37.808603999Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8fa9a6bf-7812-4541-a0c6-022eb47befdb name=/runtime.v1.RuntimeService/ListContainers
	Feb 11 02:59:37 test-preload-813040 crio[667]: time="2025-02-11 02:59:37.808647866Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8fa9a6bf-7812-4541-a0c6-022eb47befdb name=/runtime.v1.RuntimeService/ListContainers
	Feb 11 02:59:37 test-preload-813040 crio[667]: time="2025-02-11 02:59:37.808790731Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f2030048ed8c4e182933ee5d93d542751d73a81af126bd5151ca5f67cd87de02,PodSandboxId:6ef6f48fd64af797164ac63e29c740cee550cda774a1801329710a1e032f512d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1739242769655570738,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-csxl9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37d5f114-25dc-457d-8d25-6b40bbe680b9,},Annotations:map[string]string{io.kubernetes.container.hash: d232a19d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b5d4fbe762149c2dc3094d64d1488be325cea58fd58ae1d36a71590f9d94856,PodSandboxId:00a24e1f443c91de4132e9409a785fe466cbca64a04a66c2dd883895edf13f43,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1739242762698044273,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: ff504d6a-f078-44f3-a313-d0e7f19889fb,},Annotations:map[string]string{io.kubernetes.container.hash: c7888b6f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d700edf5d221d642e21d9f9e3c16d915778e5d696e18e44696218a811fed8fc5,PodSandboxId:d410b44edcce7631ecfffcf3f07febaf01b556cc7c24c3b168ea99330ca68c43,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1739242762653003491,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zm5w7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63
7a5162-7eca-45fb-80b3-8de7ba1671e9,},Annotations:map[string]string{io.kubernetes.container.hash: 464d0be4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13f044a0b34b4bbe4a86632bb6d125ae8cc5d36abb0ee155788249cfc6381f10,PodSandboxId:68b9ce5370ca3bc2faee50d929e003142295206a9fbc711f0ea77f1102301be8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1739242756390493528,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-813040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8062004d12ae27ab76ea414fe50c0d5,},Anno
tations:map[string]string{io.kubernetes.container.hash: 8d2f62f6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b5a5a7ab26f980a609f0767218edbcebb3b166697895c3f62ce709d669a9899,PodSandboxId:19a74da6e5b3d4aa22b71f7806b81f9ca42bd22c500e1cfc297ff2ce90991f5a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1739242756355899293,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-813040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c604e644d8c9d892f072d1c97ba3621,},Annotations:map
[string]string{io.kubernetes.container.hash: a57902b3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:300aaeda75a053e40273596a1f233ab1a6fb0f63c5c9a46f5d50ac8a288442b0,PodSandboxId:5c186ce6bb95314c625448e68d1d357c6556f15e21ceeef9a9bd79a9680a68e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1739242756404333922,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-813040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7be570644b13263039d98b1cbb9fd47,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eab71d50a6bac8c68b5940bfa99376771313a7f8aa144d0087475381521018e0,PodSandboxId:1acf068ae88cf5aeedb174138aa1c5643ff58834fe54871cb04040d1f274f2b4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1739242756318201327,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-813040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6e407fab3108c4ba121efc722b88ead,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8fa9a6bf-7812-4541-a0c6-022eb47befdb name=/runtime.v1.RuntimeService/ListContainers
	Feb 11 02:59:37 test-preload-813040 crio[667]: time="2025-02-11 02:59:37.831690182Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=2778b621-7e3a-430a-9709-81c3e4d01aab name=/runtime.v1.RuntimeService/ListPodSandbox
	Feb 11 02:59:37 test-preload-813040 crio[667]: time="2025-02-11 02:59:37.832008365Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:6ef6f48fd64af797164ac63e29c740cee550cda774a1801329710a1e032f512d,Metadata:&PodSandboxMetadata{Name:coredns-6d4b75cb6d-csxl9,Uid:37d5f114-25dc-457d-8d25-6b40bbe680b9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1739242769457082350,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6d4b75cb6d-csxl9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37d5f114-25dc-457d-8d25-6b40bbe680b9,k8s-app: kube-dns,pod-template-hash: 6d4b75cb6d,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-02-11T02:59:21.650558725Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d410b44edcce7631ecfffcf3f07febaf01b556cc7c24c3b168ea99330ca68c43,Metadata:&PodSandboxMetadata{Name:kube-proxy-zm5w7,Uid:637a5162-7eca-45fb-80b3-8de7ba1671e9,Namespace:kube-system,A
ttempt:0,},State:SANDBOX_READY,CreatedAt:1739242762563424512,Labels:map[string]string{controller-revision-hash: 6fd4744df8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-zm5w7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 637a5162-7eca-45fb-80b3-8de7ba1671e9,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-02-11T02:59:21.650577322Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:00a24e1f443c91de4132e9409a785fe466cbca64a04a66c2dd883895edf13f43,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:ff504d6a-f078-44f3-a313-d0e7f19889fb,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1739242762561049014,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff504d6a-f078-44f3-a313-d0e7
f19889fb,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-02-11T02:59:21.650579379Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:19a74da6e5b3d4aa22b71f7806b81f9ca42bd22c500e1cfc297ff2ce90991f5a,Metadata:&PodSandboxMetadata{Name:kube-apiserver-test-preload-813040,Uid:5c604e6
44d8c9d892f072d1c97ba3621,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1739242756172493674,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-test-preload-813040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c604e644d8c9d892f072d1c97ba3621,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.238:8443,kubernetes.io/config.hash: 5c604e644d8c9d892f072d1c97ba3621,kubernetes.io/config.seen: 2025-02-11T02:59:15.673869525Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5c186ce6bb95314c625448e68d1d357c6556f15e21ceeef9a9bd79a9680a68e6,Metadata:&PodSandboxMetadata{Name:kube-scheduler-test-preload-813040,Uid:c7be570644b13263039d98b1cbb9fd47,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1739242756170587578,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubern
etes.pod.name: kube-scheduler-test-preload-813040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7be570644b13263039d98b1cbb9fd47,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c7be570644b13263039d98b1cbb9fd47,kubernetes.io/config.seen: 2025-02-11T02:59:15.673871724Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:68b9ce5370ca3bc2faee50d929e003142295206a9fbc711f0ea77f1102301be8,Metadata:&PodSandboxMetadata{Name:etcd-test-preload-813040,Uid:e8062004d12ae27ab76ea414fe50c0d5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1739242756169315196,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-test-preload-813040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8062004d12ae27ab76ea414fe50c0d5,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.238:2379,kubernetes.io/config.hash: e8062004d12ae27
ab76ea414fe50c0d5,kubernetes.io/config.seen: 2025-02-11T02:59:15.673840136Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1acf068ae88cf5aeedb174138aa1c5643ff58834fe54871cb04040d1f274f2b4,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-test-preload-813040,Uid:e6e407fab3108c4ba121efc722b88ead,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1739242756168738128,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-test-preload-813040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6e407fab3108c4ba121efc722b88ead,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e6e407fab3108c4ba121efc722b88ead,kubernetes.io/config.seen: 2025-02-11T02:59:15.673870882Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=2778b621-7e3a-430a-9709-81c3e4d01aab name=/runtime.v1.RuntimeService/ListPodSandbox

                                                
                                                
	Feb 11 02:59:37 test-preload-813040 crio[667]: time="2025-02-11 02:59:37.832634103Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a66a187e-91be-42d9-bf9c-39a57f07de0e name=/runtime.v1.RuntimeService/ListContainers
	Feb 11 02:59:37 test-preload-813040 crio[667]: time="2025-02-11 02:59:37.832681926Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a66a187e-91be-42d9-bf9c-39a57f07de0e name=/runtime.v1.RuntimeService/ListContainers
	Feb 11 02:59:37 test-preload-813040 crio[667]: time="2025-02-11 02:59:37.832825846Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f2030048ed8c4e182933ee5d93d542751d73a81af126bd5151ca5f67cd87de02,PodSandboxId:6ef6f48fd64af797164ac63e29c740cee550cda774a1801329710a1e032f512d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1739242769655570738,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-csxl9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37d5f114-25dc-457d-8d25-6b40bbe680b9,},Annotations:map[string]string{io.kubernetes.container.hash: d232a19d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b5d4fbe762149c2dc3094d64d1488be325cea58fd58ae1d36a71590f9d94856,PodSandboxId:00a24e1f443c91de4132e9409a785fe466cbca64a04a66c2dd883895edf13f43,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1739242762698044273,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: ff504d6a-f078-44f3-a313-d0e7f19889fb,},Annotations:map[string]string{io.kubernetes.container.hash: c7888b6f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d700edf5d221d642e21d9f9e3c16d915778e5d696e18e44696218a811fed8fc5,PodSandboxId:d410b44edcce7631ecfffcf3f07febaf01b556cc7c24c3b168ea99330ca68c43,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1739242762653003491,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zm5w7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63
7a5162-7eca-45fb-80b3-8de7ba1671e9,},Annotations:map[string]string{io.kubernetes.container.hash: 464d0be4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13f044a0b34b4bbe4a86632bb6d125ae8cc5d36abb0ee155788249cfc6381f10,PodSandboxId:68b9ce5370ca3bc2faee50d929e003142295206a9fbc711f0ea77f1102301be8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1739242756390493528,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-813040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8062004d12ae27ab76ea414fe50c0d5,},Anno
tations:map[string]string{io.kubernetes.container.hash: 8d2f62f6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b5a5a7ab26f980a609f0767218edbcebb3b166697895c3f62ce709d669a9899,PodSandboxId:19a74da6e5b3d4aa22b71f7806b81f9ca42bd22c500e1cfc297ff2ce90991f5a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1739242756355899293,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-813040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c604e644d8c9d892f072d1c97ba3621,},Annotations:map
[string]string{io.kubernetes.container.hash: a57902b3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:300aaeda75a053e40273596a1f233ab1a6fb0f63c5c9a46f5d50ac8a288442b0,PodSandboxId:5c186ce6bb95314c625448e68d1d357c6556f15e21ceeef9a9bd79a9680a68e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1739242756404333922,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-813040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7be570644b13263039d98b1cbb9fd47,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eab71d50a6bac8c68b5940bfa99376771313a7f8aa144d0087475381521018e0,PodSandboxId:1acf068ae88cf5aeedb174138aa1c5643ff58834fe54871cb04040d1f274f2b4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1739242756318201327,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-813040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6e407fab3108c4ba121efc722b88ead,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a66a187e-91be-42d9-bf9c-39a57f07de0e name=/runtime.v1.RuntimeService/ListContainers
	Feb 11 02:59:37 test-preload-813040 crio[667]: time="2025-02-11 02:59:37.839581624Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e1c8065e-ffad-4b80-bcd3-e0358f0202ab name=/runtime.v1.RuntimeService/Version
	Feb 11 02:59:37 test-preload-813040 crio[667]: time="2025-02-11 02:59:37.839631984Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e1c8065e-ffad-4b80-bcd3-e0358f0202ab name=/runtime.v1.RuntimeService/Version
	Feb 11 02:59:37 test-preload-813040 crio[667]: time="2025-02-11 02:59:37.840542309Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2043263f-75dc-4fff-942f-26c6a770309b name=/runtime.v1.ImageService/ImageFsInfo
	Feb 11 02:59:37 test-preload-813040 crio[667]: time="2025-02-11 02:59:37.840949930Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739242777840931095,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2043263f-75dc-4fff-942f-26c6a770309b name=/runtime.v1.ImageService/ImageFsInfo
	Feb 11 02:59:37 test-preload-813040 crio[667]: time="2025-02-11 02:59:37.841489925Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=65de2e69-9690-405c-be3b-d72eba454254 name=/runtime.v1.RuntimeService/ListContainers
	Feb 11 02:59:37 test-preload-813040 crio[667]: time="2025-02-11 02:59:37.841536242Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=65de2e69-9690-405c-be3b-d72eba454254 name=/runtime.v1.RuntimeService/ListContainers
	Feb 11 02:59:37 test-preload-813040 crio[667]: time="2025-02-11 02:59:37.841677372Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f2030048ed8c4e182933ee5d93d542751d73a81af126bd5151ca5f67cd87de02,PodSandboxId:6ef6f48fd64af797164ac63e29c740cee550cda774a1801329710a1e032f512d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1739242769655570738,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-csxl9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37d5f114-25dc-457d-8d25-6b40bbe680b9,},Annotations:map[string]string{io.kubernetes.container.hash: d232a19d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b5d4fbe762149c2dc3094d64d1488be325cea58fd58ae1d36a71590f9d94856,PodSandboxId:00a24e1f443c91de4132e9409a785fe466cbca64a04a66c2dd883895edf13f43,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1739242762698044273,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: ff504d6a-f078-44f3-a313-d0e7f19889fb,},Annotations:map[string]string{io.kubernetes.container.hash: c7888b6f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d700edf5d221d642e21d9f9e3c16d915778e5d696e18e44696218a811fed8fc5,PodSandboxId:d410b44edcce7631ecfffcf3f07febaf01b556cc7c24c3b168ea99330ca68c43,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1739242762653003491,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zm5w7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63
7a5162-7eca-45fb-80b3-8de7ba1671e9,},Annotations:map[string]string{io.kubernetes.container.hash: 464d0be4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13f044a0b34b4bbe4a86632bb6d125ae8cc5d36abb0ee155788249cfc6381f10,PodSandboxId:68b9ce5370ca3bc2faee50d929e003142295206a9fbc711f0ea77f1102301be8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1739242756390493528,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-813040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8062004d12ae27ab76ea414fe50c0d5,},Anno
tations:map[string]string{io.kubernetes.container.hash: 8d2f62f6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8b5a5a7ab26f980a609f0767218edbcebb3b166697895c3f62ce709d669a9899,PodSandboxId:19a74da6e5b3d4aa22b71f7806b81f9ca42bd22c500e1cfc297ff2ce90991f5a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1739242756355899293,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-813040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c604e644d8c9d892f072d1c97ba3621,},Annotations:map
[string]string{io.kubernetes.container.hash: a57902b3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:300aaeda75a053e40273596a1f233ab1a6fb0f63c5c9a46f5d50ac8a288442b0,PodSandboxId:5c186ce6bb95314c625448e68d1d357c6556f15e21ceeef9a9bd79a9680a68e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1739242756404333922,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-813040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7be570644b13263039d98b1cbb9fd47,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eab71d50a6bac8c68b5940bfa99376771313a7f8aa144d0087475381521018e0,PodSandboxId:1acf068ae88cf5aeedb174138aa1c5643ff58834fe54871cb04040d1f274f2b4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1739242756318201327,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-813040,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6e407fab3108c4ba121efc722b88ead,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=65de2e69-9690-405c-be3b-d72eba454254 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f2030048ed8c4       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   8 seconds ago       Running             coredns                   1                   6ef6f48fd64af       coredns-6d4b75cb6d-csxl9
	7b5d4fbe76214       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 seconds ago      Running             storage-provisioner       1                   00a24e1f443c9       storage-provisioner
	d700edf5d221d       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   15 seconds ago      Running             kube-proxy                1                   d410b44edcce7       kube-proxy-zm5w7
	300aaeda75a05       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   21 seconds ago      Running             kube-scheduler            1                   5c186ce6bb953       kube-scheduler-test-preload-813040
	13f044a0b34b4       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   21 seconds ago      Running             etcd                      1                   68b9ce5370ca3       etcd-test-preload-813040
	8b5a5a7ab26f9       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   21 seconds ago      Running             kube-apiserver            1                   19a74da6e5b3d       kube-apiserver-test-preload-813040
	eab71d50a6bac       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   21 seconds ago      Running             kube-controller-manager   1                   1acf068ae88cf       kube-controller-manager-test-preload-813040
	
	
	==> coredns [f2030048ed8c4e182933ee5d93d542751d73a81af126bd5151ca5f67cd87de02] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:49381 - 39717 "HINFO IN 4697034742195009588.4352330519435783596. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.050971246s
	
	
	==> describe nodes <==
	Name:               test-preload-813040
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-813040
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8e91f70b9b442caa4bec80b031add390ac34d321
	                    minikube.k8s.io/name=test-preload-813040
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_02_11T02_56_06_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 11 Feb 2025 02:56:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-813040
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 11 Feb 2025 02:59:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 11 Feb 2025 02:59:31 +0000   Tue, 11 Feb 2025 02:56:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 11 Feb 2025 02:59:31 +0000   Tue, 11 Feb 2025 02:56:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 11 Feb 2025 02:59:31 +0000   Tue, 11 Feb 2025 02:56:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 11 Feb 2025 02:59:31 +0000   Tue, 11 Feb 2025 02:59:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.238
	  Hostname:    test-preload-813040
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 60983325afec405885a310e0deb527b2
	  System UUID:                60983325-afec-4058-85a3-10e0deb527b2
	  Boot ID:                    e3f35ca2-71ac-4c31-a4aa-7bf5dd412567
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-csxl9                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     3m19s
	  kube-system                 etcd-test-preload-813040                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m32s
	  kube-system                 kube-apiserver-test-preload-813040             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 kube-controller-manager-test-preload-813040    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 kube-proxy-zm5w7                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m20s
	  kube-system                 kube-scheduler-test-preload-813040             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 15s                    kube-proxy       
	  Normal  Starting                 3m16s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m39s (x5 over 3m40s)  kubelet          Node test-preload-813040 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m39s (x4 over 3m40s)  kubelet          Node test-preload-813040 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m39s (x4 over 3m40s)  kubelet          Node test-preload-813040 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m32s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  3m32s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m32s                  kubelet          Node test-preload-813040 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m32s                  kubelet          Node test-preload-813040 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m32s                  kubelet          Node test-preload-813040 status is now: NodeHasSufficientPID
	  Normal  NodeReady                3m21s                  kubelet          Node test-preload-813040 status is now: NodeReady
	  Normal  RegisteredNode           3m20s                  node-controller  Node test-preload-813040 event: Registered Node test-preload-813040 in Controller
	  Normal  Starting                 23s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23s (x8 over 23s)      kubelet          Node test-preload-813040 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x8 over 23s)      kubelet          Node test-preload-813040 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x7 over 23s)      kubelet          Node test-preload-813040 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4s                     node-controller  Node test-preload-813040 event: Registered Node test-preload-813040 in Controller
	
	
	==> dmesg <==
	[Feb11 02:58] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052747] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037532] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.844674] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.919007] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.587302] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Feb11 02:59] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.056558] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067533] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.170428] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.124311] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.262903] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[ +12.611857] systemd-fstab-generator[986]: Ignoring "noauto" option for root device
	[  +0.063730] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.670438] systemd-fstab-generator[1115]: Ignoring "noauto" option for root device
	[  +6.631029] kauditd_printk_skb: 105 callbacks suppressed
	[  +1.433686] systemd-fstab-generator[1763]: Ignoring "noauto" option for root device
	[  +5.956215] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [13f044a0b34b4bbe4a86632bb6d125ae8cc5d36abb0ee155788249cfc6381f10] <==
	{"level":"info","ts":"2025-02-11T02:59:16.865Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"fff3906243738b90","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2025-02-11T02:59:16.868Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-02-11T02:59:16.890Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fff3906243738b90 switched to configuration voters=(18443243650725153680)"}
	{"level":"info","ts":"2025-02-11T02:59:16.890Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3658928c14b8a733","local-member-id":"fff3906243738b90","added-peer-id":"fff3906243738b90","added-peer-peer-urls":["https://192.168.39.238:2380"]}
	{"level":"info","ts":"2025-02-11T02:59:16.890Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3658928c14b8a733","local-member-id":"fff3906243738b90","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-11T02:59:16.890Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-11T02:59:16.894Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-02-11T02:59:16.895Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"fff3906243738b90","initial-advertise-peer-urls":["https://192.168.39.238:2380"],"listen-peer-urls":["https://192.168.39.238:2380"],"advertise-client-urls":["https://192.168.39.238:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.238:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-02-11T02:59:16.895Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-02-11T02:59:16.895Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.238:2380"}
	{"level":"info","ts":"2025-02-11T02:59:16.895Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.238:2380"}
	{"level":"info","ts":"2025-02-11T02:59:18.519Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fff3906243738b90 is starting a new election at term 2"}
	{"level":"info","ts":"2025-02-11T02:59:18.520Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fff3906243738b90 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-02-11T02:59:18.520Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fff3906243738b90 received MsgPreVoteResp from fff3906243738b90 at term 2"}
	{"level":"info","ts":"2025-02-11T02:59:18.520Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fff3906243738b90 became candidate at term 3"}
	{"level":"info","ts":"2025-02-11T02:59:18.520Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fff3906243738b90 received MsgVoteResp from fff3906243738b90 at term 3"}
	{"level":"info","ts":"2025-02-11T02:59:18.520Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fff3906243738b90 became leader at term 3"}
	{"level":"info","ts":"2025-02-11T02:59:18.520Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fff3906243738b90 elected leader fff3906243738b90 at term 3"}
	{"level":"info","ts":"2025-02-11T02:59:18.520Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"fff3906243738b90","local-member-attributes":"{Name:test-preload-813040 ClientURLs:[https://192.168.39.238:2379]}","request-path":"/0/members/fff3906243738b90/attributes","cluster-id":"3658928c14b8a733","publish-timeout":"7s"}
	{"level":"info","ts":"2025-02-11T02:59:18.521Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-11T02:59:18.522Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-11T02:59:18.523Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.238:2379"}
	{"level":"info","ts":"2025-02-11T02:59:18.523Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-02-11T02:59:18.523Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-02-11T02:59:18.523Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 02:59:38 up 0 min,  0 users,  load average: 0.54, 0.15, 0.05
	Linux test-preload-813040 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [8b5a5a7ab26f980a609f0767218edbcebb3b166697895c3f62ce709d669a9899] <==
	I0211 02:59:20.849437       1 establishing_controller.go:76] Starting EstablishingController
	I0211 02:59:20.849467       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0211 02:59:20.849552       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0211 02:59:20.849600       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0211 02:59:20.838299       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0211 02:59:20.896293       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0211 02:59:20.916318       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0211 02:59:20.939289       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0211 02:59:20.939826       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0211 02:59:20.944892       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0211 02:59:20.946978       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0211 02:59:20.947302       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0211 02:59:20.969814       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0211 02:59:21.008365       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0211 02:59:21.026029       1 cache.go:39] Caches are synced for autoregister controller
	I0211 02:59:21.520862       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0211 02:59:21.852490       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0211 02:59:22.320032       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0211 02:59:22.332907       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0211 02:59:22.365350       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0211 02:59:22.379068       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0211 02:59:22.386048       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0211 02:59:22.978292       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0211 02:59:34.064860       1 controller.go:611] quota admission added evaluator for: endpoints
	I0211 02:59:34.269126       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [eab71d50a6bac8c68b5940bfa99376771313a7f8aa144d0087475381521018e0] <==
	I0211 02:59:34.031657       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0211 02:59:34.036574       1 shared_informer.go:262] Caches are synced for job
	I0211 02:59:34.051227       1 shared_informer.go:262] Caches are synced for endpoint
	I0211 02:59:34.053426       1 shared_informer.go:262] Caches are synced for service account
	I0211 02:59:34.056028       1 shared_informer.go:262] Caches are synced for TTL
	I0211 02:59:34.056227       1 shared_informer.go:262] Caches are synced for persistent volume
	I0211 02:59:34.056402       1 shared_informer.go:262] Caches are synced for ephemeral
	I0211 02:59:34.073878       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0211 02:59:34.124960       1 shared_informer.go:262] Caches are synced for deployment
	I0211 02:59:34.140473       1 shared_informer.go:262] Caches are synced for disruption
	I0211 02:59:34.140500       1 disruption.go:371] Sending events to api server.
	I0211 02:59:34.160703       1 shared_informer.go:262] Caches are synced for taint
	I0211 02:59:34.160922       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	I0211 02:59:34.161015       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	W0211 02:59:34.161184       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-813040. Assuming now as a timestamp.
	I0211 02:59:34.161291       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0211 02:59:34.161664       1 event.go:294] "Event occurred" object="test-preload-813040" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-813040 event: Registered Node test-preload-813040 in Controller"
	I0211 02:59:34.183568       1 shared_informer.go:262] Caches are synced for daemon sets
	I0211 02:59:34.218317       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0211 02:59:34.228056       1 shared_informer.go:262] Caches are synced for resource quota
	I0211 02:59:34.251797       1 shared_informer.go:262] Caches are synced for resource quota
	I0211 02:59:34.260299       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0211 02:59:34.681693       1 shared_informer.go:262] Caches are synced for garbage collector
	I0211 02:59:34.681791       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0211 02:59:34.681897       1 shared_informer.go:262] Caches are synced for garbage collector
	
	
	==> kube-proxy [d700edf5d221d642e21d9f9e3c16d915778e5d696e18e44696218a811fed8fc5] <==
	I0211 02:59:22.938266       1 node.go:163] Successfully retrieved node IP: 192.168.39.238
	I0211 02:59:22.938401       1 server_others.go:138] "Detected node IP" address="192.168.39.238"
	I0211 02:59:22.938448       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0211 02:59:22.970143       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0211 02:59:22.970214       1 server_others.go:206] "Using iptables Proxier"
	I0211 02:59:22.970271       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0211 02:59:22.970615       1 server.go:661] "Version info" version="v1.24.4"
	I0211 02:59:22.970644       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0211 02:59:22.973061       1 config.go:317] "Starting service config controller"
	I0211 02:59:22.973077       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0211 02:59:22.973096       1 config.go:226] "Starting endpoint slice config controller"
	I0211 02:59:22.973100       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0211 02:59:22.973944       1 config.go:444] "Starting node config controller"
	I0211 02:59:22.973972       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0211 02:59:23.073504       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0211 02:59:23.073648       1 shared_informer.go:262] Caches are synced for service config
	I0211 02:59:23.074800       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [300aaeda75a053e40273596a1f233ab1a6fb0f63c5c9a46f5d50ac8a288442b0] <==
	I0211 02:59:17.244189       1 serving.go:348] Generated self-signed cert in-memory
	W0211 02:59:20.895898       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0211 02:59:20.896065       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0211 02:59:20.896132       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0211 02:59:20.896222       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0211 02:59:20.930985       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0211 02:59:20.931100       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0211 02:59:20.941251       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0211 02:59:20.941485       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0211 02:59:20.943222       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0211 02:59:20.952753       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0211 02:59:21.044220       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 11 02:59:20 test-preload-813040 kubelet[1122]: I0211 02:59:20.983501    1122 setters.go:532] "Node became not ready" node="test-preload-813040" condition={Type:Ready Status:False LastHeartbeatTime:2025-02-11 02:59:20.983444107 +0000 UTC m=+5.454505341 LastTransitionTime:2025-02-11 02:59:20.983444107 +0000 UTC m=+5.454505341 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?}
	Feb 11 02:59:21 test-preload-813040 kubelet[1122]: I0211 02:59:21.646323    1122 apiserver.go:52] "Watching apiserver"
	Feb 11 02:59:21 test-preload-813040 kubelet[1122]: I0211 02:59:21.650742    1122 topology_manager.go:200] "Topology Admit Handler"
	Feb 11 02:59:21 test-preload-813040 kubelet[1122]: I0211 02:59:21.651998    1122 topology_manager.go:200] "Topology Admit Handler"
	Feb 11 02:59:21 test-preload-813040 kubelet[1122]: I0211 02:59:21.653742    1122 topology_manager.go:200] "Topology Admit Handler"
	Feb 11 02:59:21 test-preload-813040 kubelet[1122]: E0211 02:59:21.653958    1122 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-csxl9" podUID=37d5f114-25dc-457d-8d25-6b40bbe680b9
	Feb 11 02:59:21 test-preload-813040 kubelet[1122]: I0211 02:59:21.718607    1122 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/637a5162-7eca-45fb-80b3-8de7ba1671e9-kube-proxy\") pod \"kube-proxy-zm5w7\" (UID: \"637a5162-7eca-45fb-80b3-8de7ba1671e9\") " pod="kube-system/kube-proxy-zm5w7"
	Feb 11 02:59:21 test-preload-813040 kubelet[1122]: I0211 02:59:21.718775    1122 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37d5f114-25dc-457d-8d25-6b40bbe680b9-config-volume\") pod \"coredns-6d4b75cb6d-csxl9\" (UID: \"37d5f114-25dc-457d-8d25-6b40bbe680b9\") " pod="kube-system/coredns-6d4b75cb6d-csxl9"
	Feb 11 02:59:21 test-preload-813040 kubelet[1122]: I0211 02:59:21.718872    1122 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kqpp\" (UniqueName: \"kubernetes.io/projected/37d5f114-25dc-457d-8d25-6b40bbe680b9-kube-api-access-2kqpp\") pod \"coredns-6d4b75cb6d-csxl9\" (UID: \"37d5f114-25dc-457d-8d25-6b40bbe680b9\") " pod="kube-system/coredns-6d4b75cb6d-csxl9"
	Feb 11 02:59:21 test-preload-813040 kubelet[1122]: I0211 02:59:21.718960    1122 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/637a5162-7eca-45fb-80b3-8de7ba1671e9-xtables-lock\") pod \"kube-proxy-zm5w7\" (UID: \"637a5162-7eca-45fb-80b3-8de7ba1671e9\") " pod="kube-system/kube-proxy-zm5w7"
	Feb 11 02:59:21 test-preload-813040 kubelet[1122]: I0211 02:59:21.718981    1122 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/637a5162-7eca-45fb-80b3-8de7ba1671e9-lib-modules\") pod \"kube-proxy-zm5w7\" (UID: \"637a5162-7eca-45fb-80b3-8de7ba1671e9\") " pod="kube-system/kube-proxy-zm5w7"
	Feb 11 02:59:21 test-preload-813040 kubelet[1122]: I0211 02:59:21.719070    1122 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8f8b\" (UniqueName: \"kubernetes.io/projected/637a5162-7eca-45fb-80b3-8de7ba1671e9-kube-api-access-d8f8b\") pod \"kube-proxy-zm5w7\" (UID: \"637a5162-7eca-45fb-80b3-8de7ba1671e9\") " pod="kube-system/kube-proxy-zm5w7"
	Feb 11 02:59:21 test-preload-813040 kubelet[1122]: I0211 02:59:21.719192    1122 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ff504d6a-f078-44f3-a313-d0e7f19889fb-tmp\") pod \"storage-provisioner\" (UID: \"ff504d6a-f078-44f3-a313-d0e7f19889fb\") " pod="kube-system/storage-provisioner"
	Feb 11 02:59:21 test-preload-813040 kubelet[1122]: I0211 02:59:21.719248    1122 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4cb8\" (UniqueName: \"kubernetes.io/projected/ff504d6a-f078-44f3-a313-d0e7f19889fb-kube-api-access-r4cb8\") pod \"storage-provisioner\" (UID: \"ff504d6a-f078-44f3-a313-d0e7f19889fb\") " pod="kube-system/storage-provisioner"
	Feb 11 02:59:21 test-preload-813040 kubelet[1122]: I0211 02:59:21.719339    1122 reconciler.go:159] "Reconciler: start to sync state"
	Feb 11 02:59:21 test-preload-813040 kubelet[1122]: E0211 02:59:21.821311    1122 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Feb 11 02:59:21 test-preload-813040 kubelet[1122]: E0211 02:59:21.821590    1122 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/37d5f114-25dc-457d-8d25-6b40bbe680b9-config-volume podName:37d5f114-25dc-457d-8d25-6b40bbe680b9 nodeName:}" failed. No retries permitted until 2025-02-11 02:59:22.321512954 +0000 UTC m=+6.792574202 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/37d5f114-25dc-457d-8d25-6b40bbe680b9-config-volume") pod "coredns-6d4b75cb6d-csxl9" (UID: "37d5f114-25dc-457d-8d25-6b40bbe680b9") : object "kube-system"/"coredns" not registered
	Feb 11 02:59:22 test-preload-813040 kubelet[1122]: E0211 02:59:22.325068    1122 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Feb 11 02:59:22 test-preload-813040 kubelet[1122]: E0211 02:59:22.325131    1122 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/37d5f114-25dc-457d-8d25-6b40bbe680b9-config-volume podName:37d5f114-25dc-457d-8d25-6b40bbe680b9 nodeName:}" failed. No retries permitted until 2025-02-11 02:59:23.32511858 +0000 UTC m=+7.796179815 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/37d5f114-25dc-457d-8d25-6b40bbe680b9-config-volume") pod "coredns-6d4b75cb6d-csxl9" (UID: "37d5f114-25dc-457d-8d25-6b40bbe680b9") : object "kube-system"/"coredns" not registered
	Feb 11 02:59:22 test-preload-813040 kubelet[1122]: E0211 02:59:22.743303    1122 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-csxl9" podUID=37d5f114-25dc-457d-8d25-6b40bbe680b9
	Feb 11 02:59:23 test-preload-813040 kubelet[1122]: E0211 02:59:23.332051    1122 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Feb 11 02:59:23 test-preload-813040 kubelet[1122]: E0211 02:59:23.332136    1122 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/37d5f114-25dc-457d-8d25-6b40bbe680b9-config-volume podName:37d5f114-25dc-457d-8d25-6b40bbe680b9 nodeName:}" failed. No retries permitted until 2025-02-11 02:59:25.332121407 +0000 UTC m=+9.803182653 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/37d5f114-25dc-457d-8d25-6b40bbe680b9-config-volume") pod "coredns-6d4b75cb6d-csxl9" (UID: "37d5f114-25dc-457d-8d25-6b40bbe680b9") : object "kube-system"/"coredns" not registered
	Feb 11 02:59:24 test-preload-813040 kubelet[1122]: E0211 02:59:24.744217    1122 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-csxl9" podUID=37d5f114-25dc-457d-8d25-6b40bbe680b9
	Feb 11 02:59:25 test-preload-813040 kubelet[1122]: E0211 02:59:25.355384    1122 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Feb 11 02:59:25 test-preload-813040 kubelet[1122]: E0211 02:59:25.355495    1122 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/37d5f114-25dc-457d-8d25-6b40bbe680b9-config-volume podName:37d5f114-25dc-457d-8d25-6b40bbe680b9 nodeName:}" failed. No retries permitted until 2025-02-11 02:59:29.355474572 +0000 UTC m=+13.826535818 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/37d5f114-25dc-457d-8d25-6b40bbe680b9-config-volume") pod "coredns-6d4b75cb6d-csxl9" (UID: "37d5f114-25dc-457d-8d25-6b40bbe680b9") : object "kube-system"/"coredns" not registered
	
	
	==> storage-provisioner [7b5d4fbe762149c2dc3094d64d1488be325cea58fd58ae1d36a71590f9d94856] <==
	I0211 02:59:22.775197       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-813040 -n test-preload-813040
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-813040 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-813040" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-813040
--- FAIL: TestPreload (290.06s)

                                                
                                    
TestKubernetesUpgrade (383.72s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-241335 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-241335 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m45.419916916s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-241335] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20400
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20400-12456/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20400-12456/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-241335" primary control-plane node in "kubernetes-upgrade-241335" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0211 03:05:15.233919   58214 out.go:345] Setting OutFile to fd 1 ...
	I0211 03:05:15.234058   58214 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 03:05:15.234069   58214 out.go:358] Setting ErrFile to fd 2...
	I0211 03:05:15.234077   58214 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 03:05:15.234355   58214 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20400-12456/.minikube/bin
	I0211 03:05:15.235178   58214 out.go:352] Setting JSON to false
	I0211 03:05:15.236466   58214 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6466,"bootTime":1739236649,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0211 03:05:15.236604   58214 start.go:139] virtualization: kvm guest
	I0211 03:05:15.238696   58214 out.go:177] * [kubernetes-upgrade-241335] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0211 03:05:15.239834   58214 notify.go:220] Checking for updates...
	I0211 03:05:15.239847   58214 out.go:177]   - MINIKUBE_LOCATION=20400
	I0211 03:05:15.241006   58214 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0211 03:05:15.242073   58214 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20400-12456/kubeconfig
	I0211 03:05:15.243102   58214 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20400-12456/.minikube
	I0211 03:05:15.244151   58214 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0211 03:05:15.245228   58214 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0211 03:05:15.246843   58214 config.go:182] Loaded profile config "NoKubernetes-369064": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0211 03:05:15.246992   58214 config.go:182] Loaded profile config "cert-expiration-411526": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0211 03:05:15.247134   58214 config.go:182] Loaded profile config "running-upgrade-378121": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0211 03:05:15.247244   58214 driver.go:394] Setting default libvirt URI to qemu:///system
	I0211 03:05:15.288919   58214 out.go:177] * Using the kvm2 driver based on user configuration
	I0211 03:05:15.290346   58214 start.go:297] selected driver: kvm2
	I0211 03:05:15.290368   58214 start.go:901] validating driver "kvm2" against <nil>
	I0211 03:05:15.290379   58214 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0211 03:05:15.291081   58214 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0211 03:05:15.291150   58214 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20400-12456/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0211 03:05:15.306404   58214 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0211 03:05:15.306461   58214 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0211 03:05:15.306697   58214 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0211 03:05:15.306724   58214 cni.go:84] Creating CNI manager for ""
	I0211 03:05:15.306769   58214 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0211 03:05:15.306777   58214 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0211 03:05:15.306823   58214 start.go:340] cluster config:
	{Name:kubernetes-upgrade-241335 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-241335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0211 03:05:15.306952   58214 iso.go:125] acquiring lock: {Name:mkf866c6e52b4efa55cc59a9f329105471716f9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0211 03:05:15.308676   58214 out.go:177] * Starting "kubernetes-upgrade-241335" primary control-plane node in "kubernetes-upgrade-241335" cluster
	I0211 03:05:15.309692   58214 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0211 03:05:15.309722   58214 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20400-12456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0211 03:05:15.309730   58214 cache.go:56] Caching tarball of preloaded images
	I0211 03:05:15.309808   58214 preload.go:172] Found /home/jenkins/minikube-integration/20400-12456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0211 03:05:15.309823   58214 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0211 03:05:15.309921   58214 profile.go:143] Saving config to /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/kubernetes-upgrade-241335/config.json ...
	I0211 03:05:15.309946   58214 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/kubernetes-upgrade-241335/config.json: {Name:mk9d00623ed983fe00b0ecccf9edb6cc2ca2664b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 03:05:15.310101   58214 start.go:360] acquireMachinesLock for kubernetes-upgrade-241335: {Name:mk0cbf79bfabdb28d0a301765db34c154a72eff0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0211 03:05:33.331140   58214 start.go:364] duration metric: took 18.021015855s to acquireMachinesLock for "kubernetes-upgrade-241335"
	I0211 03:05:33.331210   58214 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-241335 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-241335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0211 03:05:33.331315   58214 start.go:125] createHost starting for "" (driver="kvm2")
	I0211 03:05:33.333646   58214 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0211 03:05:33.333825   58214 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 03:05:33.333865   58214 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 03:05:33.350273   58214 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37925
	I0211 03:05:33.350660   58214 main.go:141] libmachine: () Calling .GetVersion
	I0211 03:05:33.351205   58214 main.go:141] libmachine: Using API Version  1
	I0211 03:05:33.351225   58214 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 03:05:33.351605   58214 main.go:141] libmachine: () Calling .GetMachineName
	I0211 03:05:33.351798   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetMachineName
	I0211 03:05:33.351940   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .DriverName
	I0211 03:05:33.352106   58214 start.go:159] libmachine.API.Create for "kubernetes-upgrade-241335" (driver="kvm2")
	I0211 03:05:33.352140   58214 client.go:168] LocalClient.Create starting
	I0211 03:05:33.352170   58214 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca.pem
	I0211 03:05:33.352226   58214 main.go:141] libmachine: Decoding PEM data...
	I0211 03:05:33.352248   58214 main.go:141] libmachine: Parsing certificate...
	I0211 03:05:33.352320   58214 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20400-12456/.minikube/certs/cert.pem
	I0211 03:05:33.352349   58214 main.go:141] libmachine: Decoding PEM data...
	I0211 03:05:33.352366   58214 main.go:141] libmachine: Parsing certificate...
	I0211 03:05:33.352390   58214 main.go:141] libmachine: Running pre-create checks...
	I0211 03:05:33.352407   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .PreCreateCheck
	I0211 03:05:33.352811   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetConfigRaw
	I0211 03:05:33.353189   58214 main.go:141] libmachine: Creating machine...
	I0211 03:05:33.353204   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .Create
	I0211 03:05:33.353334   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) creating KVM machine...
	I0211 03:05:33.353351   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) creating network...
	I0211 03:05:33.354568   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | found existing default KVM network
	I0211 03:05:33.355980   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | I0211 03:05:33.355821   58445 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:c5:2c:7d} reservation:<nil>}
	I0211 03:05:33.357103   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | I0211 03:05:33.357009   58445 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00027a660}
	I0211 03:05:33.357130   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | created network xml: 
	I0211 03:05:33.357143   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | <network>
	I0211 03:05:33.357153   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG |   <name>mk-kubernetes-upgrade-241335</name>
	I0211 03:05:33.357168   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG |   <dns enable='no'/>
	I0211 03:05:33.357188   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG |   
	I0211 03:05:33.357232   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0211 03:05:33.357269   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG |     <dhcp>
	I0211 03:05:33.357292   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0211 03:05:33.357299   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG |     </dhcp>
	I0211 03:05:33.357307   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG |   </ip>
	I0211 03:05:33.357314   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG |   
	I0211 03:05:33.357322   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | </network>
	I0211 03:05:33.357332   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | 
	I0211 03:05:33.362373   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | trying to create private KVM network mk-kubernetes-upgrade-241335 192.168.50.0/24...
	I0211 03:05:33.430174   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | private KVM network mk-kubernetes-upgrade-241335 192.168.50.0/24 created
	I0211 03:05:33.430213   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | I0211 03:05:33.430124   58445 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20400-12456/.minikube
	I0211 03:05:33.430227   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) setting up store path in /home/jenkins/minikube-integration/20400-12456/.minikube/machines/kubernetes-upgrade-241335 ...
	I0211 03:05:33.430278   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) building disk image from file:///home/jenkins/minikube-integration/20400-12456/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0211 03:05:33.430309   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Downloading /home/jenkins/minikube-integration/20400-12456/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20400-12456/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0211 03:05:33.687428   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | I0211 03:05:33.687294   58445 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20400-12456/.minikube/machines/kubernetes-upgrade-241335/id_rsa...
	I0211 03:05:33.909004   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | I0211 03:05:33.908839   58445 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20400-12456/.minikube/machines/kubernetes-upgrade-241335/kubernetes-upgrade-241335.rawdisk...
	I0211 03:05:33.909042   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | Writing magic tar header
	I0211 03:05:33.909060   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | Writing SSH key tar header
	I0211 03:05:33.909074   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | I0211 03:05:33.909002   58445 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20400-12456/.minikube/machines/kubernetes-upgrade-241335 ...
	I0211 03:05:33.909184   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20400-12456/.minikube/machines/kubernetes-upgrade-241335
	I0211 03:05:33.909220   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20400-12456/.minikube/machines
	I0211 03:05:33.909235   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) setting executable bit set on /home/jenkins/minikube-integration/20400-12456/.minikube/machines/kubernetes-upgrade-241335 (perms=drwx------)
	I0211 03:05:33.909252   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) setting executable bit set on /home/jenkins/minikube-integration/20400-12456/.minikube/machines (perms=drwxr-xr-x)
	I0211 03:05:33.909262   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) setting executable bit set on /home/jenkins/minikube-integration/20400-12456/.minikube (perms=drwxr-xr-x)
	I0211 03:05:33.909283   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) setting executable bit set on /home/jenkins/minikube-integration/20400-12456 (perms=drwxrwxr-x)
	I0211 03:05:33.909297   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0211 03:05:33.909308   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20400-12456/.minikube
	I0211 03:05:33.909347   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20400-12456
	I0211 03:05:33.909365   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0211 03:05:33.909376   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0211 03:05:33.909393   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) creating domain...
	I0211 03:05:33.909404   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | checking permissions on dir: /home/jenkins
	I0211 03:05:33.909415   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | checking permissions on dir: /home
	I0211 03:05:33.909426   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | skipping /home - not owner
	I0211 03:05:33.910381   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) define libvirt domain using xml: 
	I0211 03:05:33.910411   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) <domain type='kvm'>
	I0211 03:05:33.910419   58214 main.go:141] libmachine: (kubernetes-upgrade-241335)   <name>kubernetes-upgrade-241335</name>
	I0211 03:05:33.910424   58214 main.go:141] libmachine: (kubernetes-upgrade-241335)   <memory unit='MiB'>2200</memory>
	I0211 03:05:33.910429   58214 main.go:141] libmachine: (kubernetes-upgrade-241335)   <vcpu>2</vcpu>
	I0211 03:05:33.910434   58214 main.go:141] libmachine: (kubernetes-upgrade-241335)   <features>
	I0211 03:05:33.910439   58214 main.go:141] libmachine: (kubernetes-upgrade-241335)     <acpi/>
	I0211 03:05:33.910444   58214 main.go:141] libmachine: (kubernetes-upgrade-241335)     <apic/>
	I0211 03:05:33.910449   58214 main.go:141] libmachine: (kubernetes-upgrade-241335)     <pae/>
	I0211 03:05:33.910454   58214 main.go:141] libmachine: (kubernetes-upgrade-241335)     
	I0211 03:05:33.910459   58214 main.go:141] libmachine: (kubernetes-upgrade-241335)   </features>
	I0211 03:05:33.910465   58214 main.go:141] libmachine: (kubernetes-upgrade-241335)   <cpu mode='host-passthrough'>
	I0211 03:05:33.910487   58214 main.go:141] libmachine: (kubernetes-upgrade-241335)   
	I0211 03:05:33.910505   58214 main.go:141] libmachine: (kubernetes-upgrade-241335)   </cpu>
	I0211 03:05:33.910514   58214 main.go:141] libmachine: (kubernetes-upgrade-241335)   <os>
	I0211 03:05:33.910519   58214 main.go:141] libmachine: (kubernetes-upgrade-241335)     <type>hvm</type>
	I0211 03:05:33.910526   58214 main.go:141] libmachine: (kubernetes-upgrade-241335)     <boot dev='cdrom'/>
	I0211 03:05:33.910530   58214 main.go:141] libmachine: (kubernetes-upgrade-241335)     <boot dev='hd'/>
	I0211 03:05:33.910536   58214 main.go:141] libmachine: (kubernetes-upgrade-241335)     <bootmenu enable='no'/>
	I0211 03:05:33.910543   58214 main.go:141] libmachine: (kubernetes-upgrade-241335)   </os>
	I0211 03:05:33.910548   58214 main.go:141] libmachine: (kubernetes-upgrade-241335)   <devices>
	I0211 03:05:33.910557   58214 main.go:141] libmachine: (kubernetes-upgrade-241335)     <disk type='file' device='cdrom'>
	I0211 03:05:33.910565   58214 main.go:141] libmachine: (kubernetes-upgrade-241335)       <source file='/home/jenkins/minikube-integration/20400-12456/.minikube/machines/kubernetes-upgrade-241335/boot2docker.iso'/>
	I0211 03:05:33.910573   58214 main.go:141] libmachine: (kubernetes-upgrade-241335)       <target dev='hdc' bus='scsi'/>
	I0211 03:05:33.910578   58214 main.go:141] libmachine: (kubernetes-upgrade-241335)       <readonly/>
	I0211 03:05:33.910594   58214 main.go:141] libmachine: (kubernetes-upgrade-241335)     </disk>
	I0211 03:05:33.910602   58214 main.go:141] libmachine: (kubernetes-upgrade-241335)     <disk type='file' device='disk'>
	I0211 03:05:33.910612   58214 main.go:141] libmachine: (kubernetes-upgrade-241335)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0211 03:05:33.910625   58214 main.go:141] libmachine: (kubernetes-upgrade-241335)       <source file='/home/jenkins/minikube-integration/20400-12456/.minikube/machines/kubernetes-upgrade-241335/kubernetes-upgrade-241335.rawdisk'/>
	I0211 03:05:33.910638   58214 main.go:141] libmachine: (kubernetes-upgrade-241335)       <target dev='hda' bus='virtio'/>
	I0211 03:05:33.910643   58214 main.go:141] libmachine: (kubernetes-upgrade-241335)     </disk>
	I0211 03:05:33.910648   58214 main.go:141] libmachine: (kubernetes-upgrade-241335)     <interface type='network'>
	I0211 03:05:33.910656   58214 main.go:141] libmachine: (kubernetes-upgrade-241335)       <source network='mk-kubernetes-upgrade-241335'/>
	I0211 03:05:33.910661   58214 main.go:141] libmachine: (kubernetes-upgrade-241335)       <model type='virtio'/>
	I0211 03:05:33.910666   58214 main.go:141] libmachine: (kubernetes-upgrade-241335)     </interface>
	I0211 03:05:33.910677   58214 main.go:141] libmachine: (kubernetes-upgrade-241335)     <interface type='network'>
	I0211 03:05:33.910684   58214 main.go:141] libmachine: (kubernetes-upgrade-241335)       <source network='default'/>
	I0211 03:05:33.910692   58214 main.go:141] libmachine: (kubernetes-upgrade-241335)       <model type='virtio'/>
	I0211 03:05:33.910744   58214 main.go:141] libmachine: (kubernetes-upgrade-241335)     </interface>
	I0211 03:05:33.910767   58214 main.go:141] libmachine: (kubernetes-upgrade-241335)     <serial type='pty'>
	I0211 03:05:33.910777   58214 main.go:141] libmachine: (kubernetes-upgrade-241335)       <target port='0'/>
	I0211 03:05:33.910788   58214 main.go:141] libmachine: (kubernetes-upgrade-241335)     </serial>
	I0211 03:05:33.910801   58214 main.go:141] libmachine: (kubernetes-upgrade-241335)     <console type='pty'>
	I0211 03:05:33.910814   58214 main.go:141] libmachine: (kubernetes-upgrade-241335)       <target type='serial' port='0'/>
	I0211 03:05:33.910832   58214 main.go:141] libmachine: (kubernetes-upgrade-241335)     </console>
	I0211 03:05:33.910844   58214 main.go:141] libmachine: (kubernetes-upgrade-241335)     <rng model='virtio'>
	I0211 03:05:33.910850   58214 main.go:141] libmachine: (kubernetes-upgrade-241335)       <backend model='random'>/dev/random</backend>
	I0211 03:05:33.910863   58214 main.go:141] libmachine: (kubernetes-upgrade-241335)     </rng>
	I0211 03:05:33.910889   58214 main.go:141] libmachine: (kubernetes-upgrade-241335)     
	I0211 03:05:33.910904   58214 main.go:141] libmachine: (kubernetes-upgrade-241335)     
	I0211 03:05:33.910916   58214 main.go:141] libmachine: (kubernetes-upgrade-241335)   </devices>
	I0211 03:05:33.910924   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) </domain>
	I0211 03:05:33.910932   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) 
	I0211 03:05:33.915197   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has defined MAC address 52:54:00:d1:50:6c in network default
	I0211 03:05:33.915960   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) starting domain...
	I0211 03:05:33.916001   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) ensuring networks are active...
	I0211 03:05:33.916019   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has defined MAC address 52:54:00:41:58:16 in network mk-kubernetes-upgrade-241335
	I0211 03:05:33.916818   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Ensuring network default is active
	I0211 03:05:33.917155   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Ensuring network mk-kubernetes-upgrade-241335 is active
	I0211 03:05:33.917712   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) getting domain XML...
	I0211 03:05:33.918644   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) creating domain...
	I0211 03:05:35.177387   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) waiting for IP...
	I0211 03:05:35.178360   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has defined MAC address 52:54:00:41:58:16 in network mk-kubernetes-upgrade-241335
	I0211 03:05:35.178902   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | unable to find current IP address of domain kubernetes-upgrade-241335 in network mk-kubernetes-upgrade-241335
	I0211 03:05:35.178933   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | I0211 03:05:35.178865   58445 retry.go:31] will retry after 219.536994ms: waiting for domain to come up
	I0211 03:05:35.400286   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has defined MAC address 52:54:00:41:58:16 in network mk-kubernetes-upgrade-241335
	I0211 03:05:35.451804   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | unable to find current IP address of domain kubernetes-upgrade-241335 in network mk-kubernetes-upgrade-241335
	I0211 03:05:35.451836   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | I0211 03:05:35.451779   58445 retry.go:31] will retry after 363.228752ms: waiting for domain to come up
	I0211 03:05:35.816943   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has defined MAC address 52:54:00:41:58:16 in network mk-kubernetes-upgrade-241335
	I0211 03:05:35.817497   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | unable to find current IP address of domain kubernetes-upgrade-241335 in network mk-kubernetes-upgrade-241335
	I0211 03:05:35.817526   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | I0211 03:05:35.817473   58445 retry.go:31] will retry after 293.010269ms: waiting for domain to come up
	I0211 03:05:36.112146   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has defined MAC address 52:54:00:41:58:16 in network mk-kubernetes-upgrade-241335
	I0211 03:05:36.112764   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | unable to find current IP address of domain kubernetes-upgrade-241335 in network mk-kubernetes-upgrade-241335
	I0211 03:05:36.112793   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | I0211 03:05:36.112731   58445 retry.go:31] will retry after 578.745381ms: waiting for domain to come up
	I0211 03:05:36.692979   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has defined MAC address 52:54:00:41:58:16 in network mk-kubernetes-upgrade-241335
	I0211 03:05:36.693458   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | unable to find current IP address of domain kubernetes-upgrade-241335 in network mk-kubernetes-upgrade-241335
	I0211 03:05:36.693480   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | I0211 03:05:36.693430   58445 retry.go:31] will retry after 477.139638ms: waiting for domain to come up
	I0211 03:05:37.172231   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has defined MAC address 52:54:00:41:58:16 in network mk-kubernetes-upgrade-241335
	I0211 03:05:37.172711   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | unable to find current IP address of domain kubernetes-upgrade-241335 in network mk-kubernetes-upgrade-241335
	I0211 03:05:37.172771   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | I0211 03:05:37.172693   58445 retry.go:31] will retry after 743.611945ms: waiting for domain to come up
	I0211 03:05:37.917587   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has defined MAC address 52:54:00:41:58:16 in network mk-kubernetes-upgrade-241335
	I0211 03:05:37.918003   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | unable to find current IP address of domain kubernetes-upgrade-241335 in network mk-kubernetes-upgrade-241335
	I0211 03:05:37.918025   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | I0211 03:05:37.917978   58445 retry.go:31] will retry after 1.017711452s: waiting for domain to come up
	I0211 03:05:38.937556   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has defined MAC address 52:54:00:41:58:16 in network mk-kubernetes-upgrade-241335
	I0211 03:05:38.938122   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | unable to find current IP address of domain kubernetes-upgrade-241335 in network mk-kubernetes-upgrade-241335
	I0211 03:05:38.938147   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | I0211 03:05:38.938091   58445 retry.go:31] will retry after 1.193238283s: waiting for domain to come up
	I0211 03:05:40.133361   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has defined MAC address 52:54:00:41:58:16 in network mk-kubernetes-upgrade-241335
	I0211 03:05:40.133817   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | unable to find current IP address of domain kubernetes-upgrade-241335 in network mk-kubernetes-upgrade-241335
	I0211 03:05:40.133867   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | I0211 03:05:40.133805   58445 retry.go:31] will retry after 1.301699229s: waiting for domain to come up
	I0211 03:05:41.437430   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has defined MAC address 52:54:00:41:58:16 in network mk-kubernetes-upgrade-241335
	I0211 03:05:41.437875   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | unable to find current IP address of domain kubernetes-upgrade-241335 in network mk-kubernetes-upgrade-241335
	I0211 03:05:41.437904   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | I0211 03:05:41.437853   58445 retry.go:31] will retry after 1.675851297s: waiting for domain to come up
	I0211 03:05:43.115620   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has defined MAC address 52:54:00:41:58:16 in network mk-kubernetes-upgrade-241335
	I0211 03:05:43.116125   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | unable to find current IP address of domain kubernetes-upgrade-241335 in network mk-kubernetes-upgrade-241335
	I0211 03:05:43.116154   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | I0211 03:05:43.116084   58445 retry.go:31] will retry after 2.779053822s: waiting for domain to come up
	I0211 03:05:45.896369   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has defined MAC address 52:54:00:41:58:16 in network mk-kubernetes-upgrade-241335
	I0211 03:05:45.896822   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | unable to find current IP address of domain kubernetes-upgrade-241335 in network mk-kubernetes-upgrade-241335
	I0211 03:05:45.896847   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | I0211 03:05:45.896802   58445 retry.go:31] will retry after 2.845339275s: waiting for domain to come up
	I0211 03:05:48.744099   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has defined MAC address 52:54:00:41:58:16 in network mk-kubernetes-upgrade-241335
	I0211 03:05:48.744627   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | unable to find current IP address of domain kubernetes-upgrade-241335 in network mk-kubernetes-upgrade-241335
	I0211 03:05:48.744681   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | I0211 03:05:48.744578   58445 retry.go:31] will retry after 2.8865079s: waiting for domain to come up
	I0211 03:05:51.632276   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has defined MAC address 52:54:00:41:58:16 in network mk-kubernetes-upgrade-241335
	I0211 03:05:51.632715   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | unable to find current IP address of domain kubernetes-upgrade-241335 in network mk-kubernetes-upgrade-241335
	I0211 03:05:51.632754   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | I0211 03:05:51.632697   58445 retry.go:31] will retry after 3.713343356s: waiting for domain to come up
	I0211 03:05:55.348305   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has defined MAC address 52:54:00:41:58:16 in network mk-kubernetes-upgrade-241335
	I0211 03:05:55.348748   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) found domain IP: 192.168.50.243
	I0211 03:05:55.348779   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has current primary IP address 192.168.50.243 and MAC address 52:54:00:41:58:16 in network mk-kubernetes-upgrade-241335
	I0211 03:05:55.348786   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) reserving static IP address...
	I0211 03:05:55.349199   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-241335", mac: "52:54:00:41:58:16", ip: "192.168.50.243"} in network mk-kubernetes-upgrade-241335
	I0211 03:05:55.420768   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | Getting to WaitForSSH function...
	I0211 03:05:55.420808   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) reserved static IP address 192.168.50.243 for domain kubernetes-upgrade-241335
	I0211 03:05:55.420823   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) waiting for SSH...
	I0211 03:05:55.423538   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has defined MAC address 52:54:00:41:58:16 in network mk-kubernetes-upgrade-241335
	I0211 03:05:55.423985   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:58:16", ip: ""} in network mk-kubernetes-upgrade-241335: {Iface:virbr2 ExpiryTime:2025-02-11 04:05:47 +0000 UTC Type:0 Mac:52:54:00:41:58:16 Iaid: IPaddr:192.168.50.243 Prefix:24 Hostname:minikube Clientid:01:52:54:00:41:58:16}
	I0211 03:05:55.424020   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has defined IP address 192.168.50.243 and MAC address 52:54:00:41:58:16 in network mk-kubernetes-upgrade-241335
	I0211 03:05:55.424103   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | Using SSH client type: external
	I0211 03:05:55.424165   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | Using SSH private key: /home/jenkins/minikube-integration/20400-12456/.minikube/machines/kubernetes-upgrade-241335/id_rsa (-rw-------)
	I0211 03:05:55.424208   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.243 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20400-12456/.minikube/machines/kubernetes-upgrade-241335/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0211 03:05:55.424226   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | About to run SSH command:
	I0211 03:05:55.424240   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | exit 0
	I0211 03:05:55.550518   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | SSH cmd err, output: <nil>: 
	I0211 03:05:55.550816   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) KVM machine creation complete
	I0211 03:05:55.551180   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetConfigRaw
	I0211 03:05:55.551762   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .DriverName
	I0211 03:05:55.551948   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .DriverName
	I0211 03:05:55.552098   58214 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0211 03:05:55.552112   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetState
	I0211 03:05:55.553368   58214 main.go:141] libmachine: Detecting operating system of created instance...
	I0211 03:05:55.553386   58214 main.go:141] libmachine: Waiting for SSH to be available...
	I0211 03:05:55.553393   58214 main.go:141] libmachine: Getting to WaitForSSH function...
	I0211 03:05:55.553429   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHHostname
	I0211 03:05:55.555777   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has defined MAC address 52:54:00:41:58:16 in network mk-kubernetes-upgrade-241335
	I0211 03:05:55.556143   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:58:16", ip: ""} in network mk-kubernetes-upgrade-241335: {Iface:virbr2 ExpiryTime:2025-02-11 04:05:47 +0000 UTC Type:0 Mac:52:54:00:41:58:16 Iaid: IPaddr:192.168.50.243 Prefix:24 Hostname:kubernetes-upgrade-241335 Clientid:01:52:54:00:41:58:16}
	I0211 03:05:55.556171   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has defined IP address 192.168.50.243 and MAC address 52:54:00:41:58:16 in network mk-kubernetes-upgrade-241335
	I0211 03:05:55.556351   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHPort
	I0211 03:05:55.556491   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHKeyPath
	I0211 03:05:55.556631   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHKeyPath
	I0211 03:05:55.556749   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHUsername
	I0211 03:05:55.556889   58214 main.go:141] libmachine: Using SSH client type: native
	I0211 03:05:55.557117   58214 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.50.243 22 <nil> <nil>}
	I0211 03:05:55.557130   58214 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0211 03:05:55.665695   58214 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0211 03:05:55.665719   58214 main.go:141] libmachine: Detecting the provisioner...
	I0211 03:05:55.665726   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHHostname
	I0211 03:05:55.668419   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has defined MAC address 52:54:00:41:58:16 in network mk-kubernetes-upgrade-241335
	I0211 03:05:55.668750   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:58:16", ip: ""} in network mk-kubernetes-upgrade-241335: {Iface:virbr2 ExpiryTime:2025-02-11 04:05:47 +0000 UTC Type:0 Mac:52:54:00:41:58:16 Iaid: IPaddr:192.168.50.243 Prefix:24 Hostname:kubernetes-upgrade-241335 Clientid:01:52:54:00:41:58:16}
	I0211 03:05:55.668797   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has defined IP address 192.168.50.243 and MAC address 52:54:00:41:58:16 in network mk-kubernetes-upgrade-241335
	I0211 03:05:55.668937   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHPort
	I0211 03:05:55.669116   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHKeyPath
	I0211 03:05:55.669280   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHKeyPath
	I0211 03:05:55.669394   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHUsername
	I0211 03:05:55.669531   58214 main.go:141] libmachine: Using SSH client type: native
	I0211 03:05:55.669708   58214 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.50.243 22 <nil> <nil>}
	I0211 03:05:55.669721   58214 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0211 03:05:55.778991   58214 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0211 03:05:55.779084   58214 main.go:141] libmachine: found compatible host: buildroot
	I0211 03:05:55.779102   58214 main.go:141] libmachine: Provisioning with buildroot...
	I0211 03:05:55.779132   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetMachineName
	I0211 03:05:55.779373   58214 buildroot.go:166] provisioning hostname "kubernetes-upgrade-241335"
	I0211 03:05:55.779401   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetMachineName
	I0211 03:05:55.779586   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHHostname
	I0211 03:05:55.781892   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has defined MAC address 52:54:00:41:58:16 in network mk-kubernetes-upgrade-241335
	I0211 03:05:55.782186   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:58:16", ip: ""} in network mk-kubernetes-upgrade-241335: {Iface:virbr2 ExpiryTime:2025-02-11 04:05:47 +0000 UTC Type:0 Mac:52:54:00:41:58:16 Iaid: IPaddr:192.168.50.243 Prefix:24 Hostname:kubernetes-upgrade-241335 Clientid:01:52:54:00:41:58:16}
	I0211 03:05:55.782217   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has defined IP address 192.168.50.243 and MAC address 52:54:00:41:58:16 in network mk-kubernetes-upgrade-241335
	I0211 03:05:55.782354   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHPort
	I0211 03:05:55.782521   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHKeyPath
	I0211 03:05:55.782675   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHKeyPath
	I0211 03:05:55.782801   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHUsername
	I0211 03:05:55.782965   58214 main.go:141] libmachine: Using SSH client type: native
	I0211 03:05:55.783141   58214 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.50.243 22 <nil> <nil>}
	I0211 03:05:55.783157   58214 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-241335 && echo "kubernetes-upgrade-241335" | sudo tee /etc/hostname
	I0211 03:05:55.908524   58214 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-241335
	
	I0211 03:05:55.908554   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHHostname
	I0211 03:05:55.911315   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has defined MAC address 52:54:00:41:58:16 in network mk-kubernetes-upgrade-241335
	I0211 03:05:55.911702   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:58:16", ip: ""} in network mk-kubernetes-upgrade-241335: {Iface:virbr2 ExpiryTime:2025-02-11 04:05:47 +0000 UTC Type:0 Mac:52:54:00:41:58:16 Iaid: IPaddr:192.168.50.243 Prefix:24 Hostname:kubernetes-upgrade-241335 Clientid:01:52:54:00:41:58:16}
	I0211 03:05:55.911731   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has defined IP address 192.168.50.243 and MAC address 52:54:00:41:58:16 in network mk-kubernetes-upgrade-241335
	I0211 03:05:55.911921   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHPort
	I0211 03:05:55.912097   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHKeyPath
	I0211 03:05:55.912249   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHKeyPath
	I0211 03:05:55.912356   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHUsername
	I0211 03:05:55.912511   58214 main.go:141] libmachine: Using SSH client type: native
	I0211 03:05:55.912680   58214 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.50.243 22 <nil> <nil>}
	I0211 03:05:55.912699   58214 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-241335' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-241335/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-241335' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0211 03:05:56.030728   58214 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0211 03:05:56.030755   58214 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20400-12456/.minikube CaCertPath:/home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20400-12456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20400-12456/.minikube}
	I0211 03:05:56.030774   58214 buildroot.go:174] setting up certificates
	I0211 03:05:56.030786   58214 provision.go:84] configureAuth start
	I0211 03:05:56.030810   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetMachineName
	I0211 03:05:56.031122   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetIP
	I0211 03:05:56.033496   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has defined MAC address 52:54:00:41:58:16 in network mk-kubernetes-upgrade-241335
	I0211 03:05:56.033805   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:58:16", ip: ""} in network mk-kubernetes-upgrade-241335: {Iface:virbr2 ExpiryTime:2025-02-11 04:05:47 +0000 UTC Type:0 Mac:52:54:00:41:58:16 Iaid: IPaddr:192.168.50.243 Prefix:24 Hostname:kubernetes-upgrade-241335 Clientid:01:52:54:00:41:58:16}
	I0211 03:05:56.033836   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has defined IP address 192.168.50.243 and MAC address 52:54:00:41:58:16 in network mk-kubernetes-upgrade-241335
	I0211 03:05:56.033937   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHHostname
	I0211 03:05:56.035942   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has defined MAC address 52:54:00:41:58:16 in network mk-kubernetes-upgrade-241335
	I0211 03:05:56.036256   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:58:16", ip: ""} in network mk-kubernetes-upgrade-241335: {Iface:virbr2 ExpiryTime:2025-02-11 04:05:47 +0000 UTC Type:0 Mac:52:54:00:41:58:16 Iaid: IPaddr:192.168.50.243 Prefix:24 Hostname:kubernetes-upgrade-241335 Clientid:01:52:54:00:41:58:16}
	I0211 03:05:56.036293   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has defined IP address 192.168.50.243 and MAC address 52:54:00:41:58:16 in network mk-kubernetes-upgrade-241335
	I0211 03:05:56.036421   58214 provision.go:143] copyHostCerts
	I0211 03:05:56.036467   58214 exec_runner.go:144] found /home/jenkins/minikube-integration/20400-12456/.minikube/ca.pem, removing ...
	I0211 03:05:56.036483   58214 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20400-12456/.minikube/ca.pem
	I0211 03:05:56.036543   58214 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20400-12456/.minikube/ca.pem (1078 bytes)
	I0211 03:05:56.036660   58214 exec_runner.go:144] found /home/jenkins/minikube-integration/20400-12456/.minikube/cert.pem, removing ...
	I0211 03:05:56.036668   58214 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20400-12456/.minikube/cert.pem
	I0211 03:05:56.036687   58214 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20400-12456/.minikube/cert.pem (1123 bytes)
	I0211 03:05:56.036750   58214 exec_runner.go:144] found /home/jenkins/minikube-integration/20400-12456/.minikube/key.pem, removing ...
	I0211 03:05:56.036757   58214 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20400-12456/.minikube/key.pem
	I0211 03:05:56.036774   58214 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20400-12456/.minikube/key.pem (1679 bytes)
	I0211 03:05:56.036831   58214 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20400-12456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-241335 san=[127.0.0.1 192.168.50.243 kubernetes-upgrade-241335 localhost minikube]
	I0211 03:05:56.151248   58214 provision.go:177] copyRemoteCerts
	I0211 03:05:56.151311   58214 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0211 03:05:56.151334   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHHostname
	I0211 03:05:56.153991   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has defined MAC address 52:54:00:41:58:16 in network mk-kubernetes-upgrade-241335
	I0211 03:05:56.154320   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:58:16", ip: ""} in network mk-kubernetes-upgrade-241335: {Iface:virbr2 ExpiryTime:2025-02-11 04:05:47 +0000 UTC Type:0 Mac:52:54:00:41:58:16 Iaid: IPaddr:192.168.50.243 Prefix:24 Hostname:kubernetes-upgrade-241335 Clientid:01:52:54:00:41:58:16}
	I0211 03:05:56.154359   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has defined IP address 192.168.50.243 and MAC address 52:54:00:41:58:16 in network mk-kubernetes-upgrade-241335
	I0211 03:05:56.154516   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHPort
	I0211 03:05:56.154685   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHKeyPath
	I0211 03:05:56.154831   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHUsername
	I0211 03:05:56.155015   58214 sshutil.go:53] new ssh client: &{IP:192.168.50.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/kubernetes-upgrade-241335/id_rsa Username:docker}
	I0211 03:05:56.240355   58214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0211 03:05:56.262308   58214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0211 03:05:56.284088   58214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0211 03:05:56.305278   58214 provision.go:87] duration metric: took 274.478028ms to configureAuth
	I0211 03:05:56.305306   58214 buildroot.go:189] setting minikube options for container-runtime
	I0211 03:05:56.305499   58214 config.go:182] Loaded profile config "kubernetes-upgrade-241335": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0211 03:05:56.305581   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHHostname
	I0211 03:05:56.309114   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has defined MAC address 52:54:00:41:58:16 in network mk-kubernetes-upgrade-241335
	I0211 03:05:56.309621   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:58:16", ip: ""} in network mk-kubernetes-upgrade-241335: {Iface:virbr2 ExpiryTime:2025-02-11 04:05:47 +0000 UTC Type:0 Mac:52:54:00:41:58:16 Iaid: IPaddr:192.168.50.243 Prefix:24 Hostname:kubernetes-upgrade-241335 Clientid:01:52:54:00:41:58:16}
	I0211 03:05:56.309652   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has defined IP address 192.168.50.243 and MAC address 52:54:00:41:58:16 in network mk-kubernetes-upgrade-241335
	I0211 03:05:56.309819   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHPort
	I0211 03:05:56.309998   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHKeyPath
	I0211 03:05:56.310153   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHKeyPath
	I0211 03:05:56.310292   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHUsername
	I0211 03:05:56.310445   58214 main.go:141] libmachine: Using SSH client type: native
	I0211 03:05:56.310622   58214 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.50.243 22 <nil> <nil>}
	I0211 03:05:56.310637   58214 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0211 03:05:56.539462   58214 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
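
For reference, the auth step above ends with minikube writing CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube over SSH and restarting crio so the --insecure-registry flag takes effect. A minimal Go sketch of assembling that remote command (the helper below is illustrative, not minikube's actual code):

package main

import "fmt"

// buildCRIOOptsCmd assembles the shell command shown in the log above: write the
// insecure-registry option into /etc/sysconfig/crio.minikube and restart crio.
func buildCRIOOptsCmd(serviceCIDR string) string {
	opts := fmt.Sprintf("\nCRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n", serviceCIDR)
	return fmt.Sprintf(`sudo mkdir -p /etc/sysconfig && printf %%s "%s" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`, opts)
}

func main() {
	// 10.96.0.0/12 is the cluster service CIDR from the profile config in this log.
	fmt.Println(buildCRIOOptsCmd("10.96.0.0/12"))
}
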
	
	I0211 03:05:56.539493   58214 main.go:141] libmachine: Checking connection to Docker...
	I0211 03:05:56.539502   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetURL
	I0211 03:05:56.540766   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | using libvirt version 6000000
	I0211 03:05:56.542810   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has defined MAC address 52:54:00:41:58:16 in network mk-kubernetes-upgrade-241335
	I0211 03:05:56.543116   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:58:16", ip: ""} in network mk-kubernetes-upgrade-241335: {Iface:virbr2 ExpiryTime:2025-02-11 04:05:47 +0000 UTC Type:0 Mac:52:54:00:41:58:16 Iaid: IPaddr:192.168.50.243 Prefix:24 Hostname:kubernetes-upgrade-241335 Clientid:01:52:54:00:41:58:16}
	I0211 03:05:56.543147   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has defined IP address 192.168.50.243 and MAC address 52:54:00:41:58:16 in network mk-kubernetes-upgrade-241335
	I0211 03:05:56.543239   58214 main.go:141] libmachine: Docker is up and running!
	I0211 03:05:56.543252   58214 main.go:141] libmachine: Reticulating splines...
	I0211 03:05:56.543261   58214 client.go:171] duration metric: took 23.191113289s to LocalClient.Create
	I0211 03:05:56.543288   58214 start.go:167] duration metric: took 23.191183695s to libmachine.API.Create "kubernetes-upgrade-241335"
	I0211 03:05:56.543302   58214 start.go:293] postStartSetup for "kubernetes-upgrade-241335" (driver="kvm2")
	I0211 03:05:56.543315   58214 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0211 03:05:56.543339   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .DriverName
	I0211 03:05:56.543555   58214 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0211 03:05:56.543574   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHHostname
	I0211 03:05:56.545447   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has defined MAC address 52:54:00:41:58:16 in network mk-kubernetes-upgrade-241335
	I0211 03:05:56.545692   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:58:16", ip: ""} in network mk-kubernetes-upgrade-241335: {Iface:virbr2 ExpiryTime:2025-02-11 04:05:47 +0000 UTC Type:0 Mac:52:54:00:41:58:16 Iaid: IPaddr:192.168.50.243 Prefix:24 Hostname:kubernetes-upgrade-241335 Clientid:01:52:54:00:41:58:16}
	I0211 03:05:56.545720   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has defined IP address 192.168.50.243 and MAC address 52:54:00:41:58:16 in network mk-kubernetes-upgrade-241335
	I0211 03:05:56.545811   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHPort
	I0211 03:05:56.545994   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHKeyPath
	I0211 03:05:56.546143   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHUsername
	I0211 03:05:56.546284   58214 sshutil.go:53] new ssh client: &{IP:192.168.50.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/kubernetes-upgrade-241335/id_rsa Username:docker}
	I0211 03:05:56.633036   58214 ssh_runner.go:195] Run: cat /etc/os-release
	I0211 03:05:56.637121   58214 info.go:137] Remote host: Buildroot 2023.02.9
	I0211 03:05:56.637151   58214 filesync.go:126] Scanning /home/jenkins/minikube-integration/20400-12456/.minikube/addons for local assets ...
	I0211 03:05:56.637219   58214 filesync.go:126] Scanning /home/jenkins/minikube-integration/20400-12456/.minikube/files for local assets ...
	I0211 03:05:56.637309   58214 filesync.go:149] local asset: /home/jenkins/minikube-integration/20400-12456/.minikube/files/etc/ssl/certs/196452.pem -> 196452.pem in /etc/ssl/certs
	I0211 03:05:56.637418   58214 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0211 03:05:56.646191   58214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/files/etc/ssl/certs/196452.pem --> /etc/ssl/certs/196452.pem (1708 bytes)
	I0211 03:05:56.667556   58214 start.go:296] duration metric: took 124.242237ms for postStartSetup
	I0211 03:05:56.667600   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetConfigRaw
	I0211 03:05:56.668181   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetIP
	I0211 03:05:56.670739   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has defined MAC address 52:54:00:41:58:16 in network mk-kubernetes-upgrade-241335
	I0211 03:05:56.671148   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:58:16", ip: ""} in network mk-kubernetes-upgrade-241335: {Iface:virbr2 ExpiryTime:2025-02-11 04:05:47 +0000 UTC Type:0 Mac:52:54:00:41:58:16 Iaid: IPaddr:192.168.50.243 Prefix:24 Hostname:kubernetes-upgrade-241335 Clientid:01:52:54:00:41:58:16}
	I0211 03:05:56.671180   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has defined IP address 192.168.50.243 and MAC address 52:54:00:41:58:16 in network mk-kubernetes-upgrade-241335
	I0211 03:05:56.671355   58214 profile.go:143] Saving config to /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/kubernetes-upgrade-241335/config.json ...
	I0211 03:05:56.671541   58214 start.go:128] duration metric: took 23.340214261s to createHost
	I0211 03:05:56.671562   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHHostname
	I0211 03:05:56.673475   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has defined MAC address 52:54:00:41:58:16 in network mk-kubernetes-upgrade-241335
	I0211 03:05:56.673741   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:58:16", ip: ""} in network mk-kubernetes-upgrade-241335: {Iface:virbr2 ExpiryTime:2025-02-11 04:05:47 +0000 UTC Type:0 Mac:52:54:00:41:58:16 Iaid: IPaddr:192.168.50.243 Prefix:24 Hostname:kubernetes-upgrade-241335 Clientid:01:52:54:00:41:58:16}
	I0211 03:05:56.673769   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has defined IP address 192.168.50.243 and MAC address 52:54:00:41:58:16 in network mk-kubernetes-upgrade-241335
	I0211 03:05:56.673888   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHPort
	I0211 03:05:56.674045   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHKeyPath
	I0211 03:05:56.674180   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHKeyPath
	I0211 03:05:56.674305   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHUsername
	I0211 03:05:56.674450   58214 main.go:141] libmachine: Using SSH client type: native
	I0211 03:05:56.674651   58214 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.50.243 22 <nil> <nil>}
	I0211 03:05:56.674665   58214 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0211 03:05:56.787681   58214 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739243156.764449975
	
	I0211 03:05:56.787707   58214 fix.go:216] guest clock: 1739243156.764449975
	I0211 03:05:56.787716   58214 fix.go:229] Guest: 2025-02-11 03:05:56.764449975 +0000 UTC Remote: 2025-02-11 03:05:56.671552615 +0000 UTC m=+41.482865829 (delta=92.89736ms)
	I0211 03:05:56.787741   58214 fix.go:200] guest clock delta is within tolerance: 92.89736ms
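
For reference, fix.go above parses the guest's `date +%s.%N` output and compares it against the host-side timestamp to decide whether the clock skew is tolerable. A small Go sketch of that comparison using the values from this log; the one-second tolerance below is an assumption, not minikube's actual threshold:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns "1739243156.764449975" (output of `date +%s.%N`,
// nanoseconds given as nine digits) into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1739243156.764449975")
	if err != nil {
		panic(err)
	}
	// Host-side timestamp taken from the "Remote:" value in the log line above.
	remote := time.Date(2025, 2, 11, 3, 5, 56, 671552615, time.UTC)
	delta := guest.Sub(remote)
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta < time.Second && delta > -time.Second)
}
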
	I0211 03:05:56.787748   58214 start.go:83] releasing machines lock for "kubernetes-upgrade-241335", held for 23.456572384s
	I0211 03:05:56.787775   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .DriverName
	I0211 03:05:56.788047   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetIP
	I0211 03:05:56.790538   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has defined MAC address 52:54:00:41:58:16 in network mk-kubernetes-upgrade-241335
	I0211 03:05:56.790866   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:58:16", ip: ""} in network mk-kubernetes-upgrade-241335: {Iface:virbr2 ExpiryTime:2025-02-11 04:05:47 +0000 UTC Type:0 Mac:52:54:00:41:58:16 Iaid: IPaddr:192.168.50.243 Prefix:24 Hostname:kubernetes-upgrade-241335 Clientid:01:52:54:00:41:58:16}
	I0211 03:05:56.790914   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has defined IP address 192.168.50.243 and MAC address 52:54:00:41:58:16 in network mk-kubernetes-upgrade-241335
	I0211 03:05:56.791034   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .DriverName
	I0211 03:05:56.791482   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .DriverName
	I0211 03:05:56.791654   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .DriverName
	I0211 03:05:56.791746   58214 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0211 03:05:56.791787   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHHostname
	I0211 03:05:56.791890   58214 ssh_runner.go:195] Run: cat /version.json
	I0211 03:05:56.791945   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHHostname
	I0211 03:05:56.794963   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has defined MAC address 52:54:00:41:58:16 in network mk-kubernetes-upgrade-241335
	I0211 03:05:56.795277   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has defined MAC address 52:54:00:41:58:16 in network mk-kubernetes-upgrade-241335
	I0211 03:05:56.795321   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:58:16", ip: ""} in network mk-kubernetes-upgrade-241335: {Iface:virbr2 ExpiryTime:2025-02-11 04:05:47 +0000 UTC Type:0 Mac:52:54:00:41:58:16 Iaid: IPaddr:192.168.50.243 Prefix:24 Hostname:kubernetes-upgrade-241335 Clientid:01:52:54:00:41:58:16}
	I0211 03:05:56.795421   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has defined IP address 192.168.50.243 and MAC address 52:54:00:41:58:16 in network mk-kubernetes-upgrade-241335
	I0211 03:05:56.795615   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHPort
	I0211 03:05:56.795780   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:58:16", ip: ""} in network mk-kubernetes-upgrade-241335: {Iface:virbr2 ExpiryTime:2025-02-11 04:05:47 +0000 UTC Type:0 Mac:52:54:00:41:58:16 Iaid: IPaddr:192.168.50.243 Prefix:24 Hostname:kubernetes-upgrade-241335 Clientid:01:52:54:00:41:58:16}
	I0211 03:05:56.795809   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has defined IP address 192.168.50.243 and MAC address 52:54:00:41:58:16 in network mk-kubernetes-upgrade-241335
	I0211 03:05:56.795780   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHKeyPath
	I0211 03:05:56.795964   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHUsername
	I0211 03:05:56.796051   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHPort
	I0211 03:05:56.796117   58214 sshutil.go:53] new ssh client: &{IP:192.168.50.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/kubernetes-upgrade-241335/id_rsa Username:docker}
	I0211 03:05:56.796188   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHKeyPath
	I0211 03:05:56.796316   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHUsername
	I0211 03:05:56.796429   58214 sshutil.go:53] new ssh client: &{IP:192.168.50.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/kubernetes-upgrade-241335/id_rsa Username:docker}
	I0211 03:05:56.875437   58214 ssh_runner.go:195] Run: systemctl --version
	I0211 03:05:56.899757   58214 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0211 03:05:57.059658   58214 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0211 03:05:57.065409   58214 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0211 03:05:57.065484   58214 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0211 03:05:57.080573   58214 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
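
The find/mv pipeline above sidelines any bridge or podman CNI configs by appending a .mk_disabled suffix so they cannot conflict with the CNI that minikube configures later. A rough local equivalent in Go (directory layout and permissions are whatever the guest provides; this is illustrative only):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableConflictingCNI renames bridge/podman CNI configs in dir by adding a
// ".mk_disabled" suffix, mirroring the find/mv pipeline in the log above.
func disableConflictingCNI(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	// /etc/cni/net.d is the directory probed in the log; running this locally
	// requires the same layout and write access.
	disabled, err := disableConflictingCNI("/etc/cni/net.d")
	fmt.Println(disabled, err)
}
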
	I0211 03:05:57.080599   58214 start.go:495] detecting cgroup driver to use...
	I0211 03:05:57.080669   58214 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0211 03:05:57.097424   58214 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0211 03:05:57.111785   58214 docker.go:217] disabling cri-docker service (if available) ...
	I0211 03:05:57.111861   58214 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0211 03:05:57.125647   58214 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0211 03:05:57.139790   58214 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0211 03:05:57.265453   58214 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0211 03:05:57.420869   58214 docker.go:233] disabling docker service ...
	I0211 03:05:57.420936   58214 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0211 03:05:57.435274   58214 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0211 03:05:57.447525   58214 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0211 03:05:57.575819   58214 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0211 03:05:57.693347   58214 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0211 03:05:57.708363   58214 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0211 03:05:57.726478   58214 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0211 03:05:57.726533   58214 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 03:05:57.736289   58214 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0211 03:05:57.736342   58214 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 03:05:57.745904   58214 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 03:05:57.755473   58214 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
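
After pointing crictl at the CRI-O socket via /etc/crictl.yaml, the sed invocations above edit /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch cgroup_manager to cgroupfs, drop any existing conmon_cgroup line, and re-add conmon_cgroup = "pod" under the cgroup_manager key. A line-oriented Go sketch that approximates the same rewrite on the file's contents:

package main

import (
	"fmt"
	"strings"
)

// rewriteCrioConf applies roughly the same edits as the sed commands above to
// the contents of /etc/crio/crio.conf.d/02-crio.conf.
func rewriteCrioConf(conf, pauseImage, cgroupManager string) string {
	var out []string
	for _, line := range strings.Split(conf, "\n") {
		switch {
		case strings.Contains(line, "pause_image = "):
			out = append(out, fmt.Sprintf("pause_image = %q", pauseImage))
		case strings.Contains(line, "cgroup_manager = "):
			out = append(out, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
			out = append(out, `conmon_cgroup = "pod"`)
		case strings.Contains(line, "conmon_cgroup = "):
			// dropped: re-added right after cgroup_manager above
		default:
			out = append(out, line)
		}
	}
	return strings.Join(out, "\n")
}

func main() {
	sample := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.1\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
	fmt.Println(rewriteCrioConf(sample, "registry.k8s.io/pause:3.2", "cgroupfs"))
}
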
	I0211 03:05:57.764944   58214 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0211 03:05:57.774532   58214 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0211 03:05:57.783307   58214 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0211 03:05:57.783370   58214 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0211 03:05:57.795382   58214 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
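
The lines above show the netfilter fallback: when the net.bridge.bridge-nf-call-iptables sysctl key is missing, the br_netfilter module is loaded, and IPv4 forwarding is then enabled. A sketch of that probe-then-load sequence with os/exec (requires root on the target, so illustrative only):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the fallback in the log: probe the sysctl key
// and, if it is missing, load br_netfilter; finally enable IPv4 forwarding.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// Key absent: the bridge netfilter module is not loaded yet.
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %w", err)
		}
	}
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
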
	I0211 03:05:57.805638   58214 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0211 03:05:57.923228   58214 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0211 03:05:58.015088   58214 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0211 03:05:58.015182   58214 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0211 03:05:58.019808   58214 start.go:563] Will wait 60s for crictl version
	I0211 03:05:58.019871   58214 ssh_runner.go:195] Run: which crictl
	I0211 03:05:58.023548   58214 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0211 03:05:58.073160   58214 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0211 03:05:58.073241   58214 ssh_runner.go:195] Run: crio --version
	I0211 03:05:58.104038   58214 ssh_runner.go:195] Run: crio --version
	I0211 03:05:58.137803   58214 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0211 03:05:58.138769   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetIP
	I0211 03:05:58.141704   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has defined MAC address 52:54:00:41:58:16 in network mk-kubernetes-upgrade-241335
	I0211 03:05:58.142138   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:58:16", ip: ""} in network mk-kubernetes-upgrade-241335: {Iface:virbr2 ExpiryTime:2025-02-11 04:05:47 +0000 UTC Type:0 Mac:52:54:00:41:58:16 Iaid: IPaddr:192.168.50.243 Prefix:24 Hostname:kubernetes-upgrade-241335 Clientid:01:52:54:00:41:58:16}
	I0211 03:05:58.142171   58214 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has defined IP address 192.168.50.243 and MAC address 52:54:00:41:58:16 in network mk-kubernetes-upgrade-241335
	I0211 03:05:58.142377   58214 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0211 03:05:58.146159   58214 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
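
The bash one-liner above drops any stale host.minikube.internal line from /etc/hosts and appends the current gateway IP; the same pattern is reused further down for control-plane.minikube.internal. A Go sketch of that filter-and-append update on the file contents in memory:

package main

import (
	"fmt"
	"strings"
)

// upsertHostsEntry removes any existing line ending in "\t<name>" and appends
// "<ip>\t<name>", which is what the grep -v / echo pipeline above does.
func upsertHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale entry, drop it
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	sample := "127.0.0.1\tlocalhost\n192.168.50.1\thost.minikube.internal\n"
	fmt.Print(upsertHostsEntry(sample, "192.168.50.1", "host.minikube.internal"))
}
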
	I0211 03:05:58.161999   58214 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-241335 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-241335 Na
mespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.243 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwareP
ath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0211 03:05:58.162106   58214 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0211 03:05:58.162153   58214 ssh_runner.go:195] Run: sudo crictl images --output json
	I0211 03:05:58.194422   58214 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0211 03:05:58.194483   58214 ssh_runner.go:195] Run: which lz4
	I0211 03:05:58.198349   58214 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0211 03:05:58.202615   58214 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0211 03:05:58.202644   58214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0211 03:05:59.735555   58214 crio.go:462] duration metric: took 1.537230105s to copy over tarball
	I0211 03:05:59.735636   58214 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0211 03:06:02.314299   58214 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.578626399s)
	I0211 03:06:02.314339   58214 crio.go:469] duration metric: took 2.578748816s to extract the tarball
	I0211 03:06:02.314349   58214 ssh_runner.go:146] rm: /preloaded.tar.lz4
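
The preload path above is: scp the lz4 tarball to /preloaded.tar.lz4, extract it under /var with security xattrs preserved, then delete it. A sketch of running that extraction with os/exec (assumes tar and lz4 are present on the target, as they are in the minikube guest image):

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload runs the same tar invocation as the log above: decompress the
// lz4 preload in place under /var, preserving security xattrs, then delete it.
func extractPreload(tarball string) error {
	extract := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if out, err := extract.CombinedOutput(); err != nil {
		return fmt.Errorf("extract %s: %v: %s", tarball, err, out)
	}
	return exec.Command("sudo", "rm", "-f", tarball).Run()
}

func main() {
	fmt.Println(extractPreload("/preloaded.tar.lz4"))
}
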
	I0211 03:06:02.361018   58214 ssh_runner.go:195] Run: sudo crictl images --output json
	I0211 03:06:02.409570   58214 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0211 03:06:02.409615   58214 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0211 03:06:02.409689   58214 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0211 03:06:02.409967   58214 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0211 03:06:02.410169   58214 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0211 03:06:02.410338   58214 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0211 03:06:02.410512   58214 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0211 03:06:02.410676   58214 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0211 03:06:02.410684   58214 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0211 03:06:02.410910   58214 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0211 03:06:02.412445   58214 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0211 03:06:02.412464   58214 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0211 03:06:02.412463   58214 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0211 03:06:02.412476   58214 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0211 03:06:02.412493   58214 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0211 03:06:02.412541   58214 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0211 03:06:02.412544   58214 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0211 03:06:02.412943   58214 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0211 03:06:02.551380   58214 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0211 03:06:02.566972   58214 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0211 03:06:02.569757   58214 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0211 03:06:02.579481   58214 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0211 03:06:02.580995   58214 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0211 03:06:02.584748   58214 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0211 03:06:02.598248   58214 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0211 03:06:02.601900   58214 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0211 03:06:02.601943   58214 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0211 03:06:02.601989   58214 ssh_runner.go:195] Run: which crictl
	I0211 03:06:02.677734   58214 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0211 03:06:02.677783   58214 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0211 03:06:02.677793   58214 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0211 03:06:02.677822   58214 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0211 03:06:02.677834   58214 ssh_runner.go:195] Run: which crictl
	I0211 03:06:02.677853   58214 ssh_runner.go:195] Run: which crictl
	I0211 03:06:02.704622   58214 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0211 03:06:02.704668   58214 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0211 03:06:02.704715   58214 ssh_runner.go:195] Run: which crictl
	I0211 03:06:02.719863   58214 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0211 03:06:02.719905   58214 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0211 03:06:02.719956   58214 ssh_runner.go:195] Run: which crictl
	I0211 03:06:02.720014   58214 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0211 03:06:02.720050   58214 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0211 03:06:02.720086   58214 ssh_runner.go:195] Run: which crictl
	I0211 03:06:02.735744   58214 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0211 03:06:02.735792   58214 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0211 03:06:02.735845   58214 ssh_runner.go:195] Run: which crictl
	I0211 03:06:02.735895   58214 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0211 03:06:02.735943   58214 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0211 03:06:02.735997   58214 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0211 03:06:02.736017   58214 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0211 03:06:02.736053   58214 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0211 03:06:02.736095   58214 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0211 03:06:02.866799   58214 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0211 03:06:02.866900   58214 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0211 03:06:02.866930   58214 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0211 03:06:02.866994   58214 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0211 03:06:02.867011   58214 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0211 03:06:02.869694   58214 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0211 03:06:02.869749   58214 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0211 03:06:02.959960   58214 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0211 03:06:03.004069   58214 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0211 03:06:03.021001   58214 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0211 03:06:03.021098   58214 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0211 03:06:03.021120   58214 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0211 03:06:03.021217   58214 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0211 03:06:03.021263   58214 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0211 03:06:03.093847   58214 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0211 03:06:03.128924   58214 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0211 03:06:03.133483   58214 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0211 03:06:03.173889   58214 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0211 03:06:03.173961   58214 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0211 03:06:03.174032   58214 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0211 03:06:03.174052   58214 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0211 03:06:03.182966   58214 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0211 03:06:03.372837   58214 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0211 03:06:03.510828   58214 cache_images.go:92] duration metric: took 1.101193611s to LoadCachedImages
	W0211 03:06:03.510929   58214 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
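
The warning above fires because the per-image cache files (e.g. .../cache/images/amd64/registry.k8s.io/coredns_1.7.0) do not exist on the host, so the v1.20.0 images are instead pulled during kubeadm preflight. A sketch of the existence check, with the ref-to-path mapping inferred from the paths printed in this log:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// cachePathFor maps an image ref like "registry.k8s.io/coredns:1.7.0" onto the
// on-disk layout seen in the log above (cache/images/amd64/<registry>/<name>_<tag>).
func cachePathFor(cacheDir, image string) string {
	return filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
}

func main() {
	cacheDir := "/home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64"
	for _, img := range []string{
		"registry.k8s.io/coredns:1.7.0",
		"registry.k8s.io/pause:3.2",
		"registry.k8s.io/etcd:3.4.13-0",
	} {
		p := cachePathFor(cacheDir, img)
		if _, err := os.Stat(p); err != nil {
			// Missing cache file: this is the condition that produced the
			// "Unable to load cached images" warning above.
			fmt.Printf("cache miss: %s (%v)\n", p, err)
		}
	}
}
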
	I0211 03:06:03.510947   58214 kubeadm.go:934] updating node { 192.168.50.243 8443 v1.20.0 crio true true} ...
	I0211 03:06:03.511074   58214 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-241335 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.243
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-241335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0211 03:06:03.511186   58214 ssh_runner.go:195] Run: crio config
	I0211 03:06:03.560620   58214 cni.go:84] Creating CNI manager for ""
	I0211 03:06:03.560655   58214 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0211 03:06:03.560674   58214 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0211 03:06:03.560701   58214 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.243 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-241335 NodeName:kubernetes-upgrade-241335 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.243"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.243 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0211 03:06:03.560882   58214 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.243
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-241335"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.243
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.243"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
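
The kubeadm config above is rendered from the parameter set printed at kubeadm.go:189. As a toy illustration, a text/template sketch that reproduces just the InitConfiguration section from those values (the struct and template below are illustrative, not minikube's actual types):

package main

import (
	"os"
	"text/template"
)

// initCfg holds only the fields needed to render the InitConfiguration section
// shown in the log; it is not minikube's real parameter struct.
type initCfg struct {
	AdvertiseAddress string
	BindPort         int
	CRISocket        string
	NodeName         string
	NodeIP           string
}

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	t := template.Must(template.New("init").Parse(initTmpl))
	_ = t.Execute(os.Stdout, initCfg{
		AdvertiseAddress: "192.168.50.243",
		BindPort:         8443,
		CRISocket:        "/var/run/crio/crio.sock",
		NodeName:         "kubernetes-upgrade-241335",
		NodeIP:           "192.168.50.243",
	})
}
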
	
	I0211 03:06:03.560952   58214 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0211 03:06:03.570545   58214 binaries.go:44] Found k8s binaries, skipping transfer
	I0211 03:06:03.570616   58214 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0211 03:06:03.579587   58214 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0211 03:06:03.596293   58214 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0211 03:06:03.612895   58214 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0211 03:06:03.629885   58214 ssh_runner.go:195] Run: grep 192.168.50.243	control-plane.minikube.internal$ /etc/hosts
	I0211 03:06:03.633941   58214 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.243	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0211 03:06:03.646522   58214 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0211 03:06:03.773704   58214 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0211 03:06:03.791869   58214 certs.go:68] Setting up /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/kubernetes-upgrade-241335 for IP: 192.168.50.243
	I0211 03:06:03.791914   58214 certs.go:194] generating shared ca certs ...
	I0211 03:06:03.791936   58214 certs.go:226] acquiring lock for ca certs: {Name:mk14e70e4f3b98aff6eac535114852cc1d70eb3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 03:06:03.792131   58214 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20400-12456/.minikube/ca.key
	I0211 03:06:03.792192   58214 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20400-12456/.minikube/proxy-client-ca.key
	I0211 03:06:03.792204   58214 certs.go:256] generating profile certs ...
	I0211 03:06:03.792269   58214 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/kubernetes-upgrade-241335/client.key
	I0211 03:06:03.792288   58214 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/kubernetes-upgrade-241335/client.crt with IP's: []
	I0211 03:06:03.860463   58214 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/kubernetes-upgrade-241335/client.crt ...
	I0211 03:06:03.860492   58214 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/kubernetes-upgrade-241335/client.crt: {Name:mk21eded828fb31b335d4341a1ec6551d0d2076d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 03:06:03.860657   58214 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/kubernetes-upgrade-241335/client.key ...
	I0211 03:06:03.860669   58214 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/kubernetes-upgrade-241335/client.key: {Name:mk4fdf6a25d1ba3f267a163b6c5df7b90fe7997c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 03:06:03.860753   58214 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/kubernetes-upgrade-241335/apiserver.key.6266be30
	I0211 03:06:03.860769   58214 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/kubernetes-upgrade-241335/apiserver.crt.6266be30 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.243]
	I0211 03:06:03.992185   58214 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/kubernetes-upgrade-241335/apiserver.crt.6266be30 ...
	I0211 03:06:03.992215   58214 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/kubernetes-upgrade-241335/apiserver.crt.6266be30: {Name:mk5e3ede9d58ad719249d9648354e846bf5d7109 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 03:06:03.992388   58214 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/kubernetes-upgrade-241335/apiserver.key.6266be30 ...
	I0211 03:06:03.992401   58214 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/kubernetes-upgrade-241335/apiserver.key.6266be30: {Name:mke1fd0db54cc3b0b5fd9bcf30a40aafb52df70c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 03:06:03.992471   58214 certs.go:381] copying /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/kubernetes-upgrade-241335/apiserver.crt.6266be30 -> /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/kubernetes-upgrade-241335/apiserver.crt
	I0211 03:06:03.992543   58214 certs.go:385] copying /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/kubernetes-upgrade-241335/apiserver.key.6266be30 -> /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/kubernetes-upgrade-241335/apiserver.key
	I0211 03:06:03.992593   58214 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/kubernetes-upgrade-241335/proxy-client.key
	I0211 03:06:03.992608   58214 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/kubernetes-upgrade-241335/proxy-client.crt with IP's: []
	I0211 03:06:04.110973   58214 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/kubernetes-upgrade-241335/proxy-client.crt ...
	I0211 03:06:04.111004   58214 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/kubernetes-upgrade-241335/proxy-client.crt: {Name:mke9cd17103e9dfccaf2d386674d60fbde8313a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 03:06:04.111180   58214 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/kubernetes-upgrade-241335/proxy-client.key ...
	I0211 03:06:04.111194   58214 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/kubernetes-upgrade-241335/proxy-client.key: {Name:mk6c271160f41602a5c70447cb3c5b545e5f20fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
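
In the cert steps above, the apiserver serving certificate is generated on the host and signed by the shared minikubeCA, with the IP SANs listed at crypto.go:68 (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.50.243). A compact crypto/x509 sketch of issuing such a cert from an existing CA key pair; subject, validity, and error handling here are illustrative, not minikube's exact choices:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// signServingCert issues a certificate for the given IP SANs, signed by
// caCert/caKey, which is the shape of the "apiserver" profile cert above.
func signServingCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube", Organization: []string{"system:masters"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	return der, key, err
}

func main() {
	// A freshly generated self-signed CA stands in for the persisted minikubeCA
	// key pair; errors are elided for brevity.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	ips := []net.IP{net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"), net.ParseIP("192.168.50.243")}
	der, _, err := signServingCert(caCert, caKey, ips)
	fmt.Println(len(der), err)
}
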
	I0211 03:06:04.111364   58214 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/19645.pem (1338 bytes)
	W0211 03:06:04.111400   58214 certs.go:480] ignoring /home/jenkins/minikube-integration/20400-12456/.minikube/certs/19645_empty.pem, impossibly tiny 0 bytes
	I0211 03:06:04.111410   58214 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca-key.pem (1675 bytes)
	I0211 03:06:04.111433   58214 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca.pem (1078 bytes)
	I0211 03:06:04.111464   58214 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/cert.pem (1123 bytes)
	I0211 03:06:04.111482   58214 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/key.pem (1679 bytes)
	I0211 03:06:04.111520   58214 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12456/.minikube/files/etc/ssl/certs/196452.pem (1708 bytes)
	I0211 03:06:04.112154   58214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0211 03:06:04.141286   58214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0211 03:06:04.168892   58214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0211 03:06:04.196195   58214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0211 03:06:04.223993   58214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/kubernetes-upgrade-241335/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0211 03:06:04.248334   58214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/kubernetes-upgrade-241335/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0211 03:06:04.271977   58214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/kubernetes-upgrade-241335/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0211 03:06:04.295365   58214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/kubernetes-upgrade-241335/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0211 03:06:04.319407   58214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0211 03:06:04.346947   58214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/certs/19645.pem --> /usr/share/ca-certificates/19645.pem (1338 bytes)
	I0211 03:06:04.371490   58214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/files/etc/ssl/certs/196452.pem --> /usr/share/ca-certificates/196452.pem (1708 bytes)
	I0211 03:06:04.397469   58214 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0211 03:06:04.417624   58214 ssh_runner.go:195] Run: openssl version
	I0211 03:06:04.425369   58214 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0211 03:06:04.439746   58214 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0211 03:06:04.445489   58214 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 11 02:02 /usr/share/ca-certificates/minikubeCA.pem
	I0211 03:06:04.445559   58214 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0211 03:06:04.451702   58214 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0211 03:06:04.462721   58214 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19645.pem && ln -fs /usr/share/ca-certificates/19645.pem /etc/ssl/certs/19645.pem"
	I0211 03:06:04.473627   58214 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19645.pem
	I0211 03:06:04.478645   58214 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 11 02:09 /usr/share/ca-certificates/19645.pem
	I0211 03:06:04.478705   58214 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19645.pem
	I0211 03:06:04.484195   58214 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/19645.pem /etc/ssl/certs/51391683.0"
	I0211 03:06:04.494698   58214 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/196452.pem && ln -fs /usr/share/ca-certificates/196452.pem /etc/ssl/certs/196452.pem"
	I0211 03:06:04.505958   58214 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/196452.pem
	I0211 03:06:04.510494   58214 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 11 02:09 /usr/share/ca-certificates/196452.pem
	I0211 03:06:04.510549   58214 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/196452.pem
	I0211 03:06:04.516046   58214 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/196452.pem /etc/ssl/certs/3ec20f2e.0"
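
Each CA bundle copied into /usr/share/ca-certificates above gets a hash-named symlink in /etc/ssl/certs (b5213941.0, 51391683.0, 3ec20f2e.0), with the hash taken from `openssl x509 -hash -noout`. A sketch that shells out to openssl for the hash and creates the link (needs openssl and write access, so illustrative only):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert computes the OpenSSL subject hash for pemPath (the same
// `openssl x509 -hash -noout -in ...` call as in the log above) and points
// <certsDir>/<hash>.0 at it so TLS clients pick the CA up.
func linkCACert(pemPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return "", err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	// ln -fs behavior: replace any stale link first.
	_ = os.Remove(link)
	return link, os.Symlink(pemPath, link)
}

func main() {
	link, err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	fmt.Println(link, err)
}
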
	I0211 03:06:04.526432   58214 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0211 03:06:04.530481   58214 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0211 03:06:04.530530   58214 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-241335 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-241335 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.243 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath
: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0211 03:06:04.530600   58214 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0211 03:06:04.530658   58214 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0211 03:06:04.580423   58214 cri.go:89] found id: ""
	I0211 03:06:04.580502   58214 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0211 03:06:04.590559   58214 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0211 03:06:04.600111   58214 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0211 03:06:04.612744   58214 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0211 03:06:04.612768   58214 kubeadm.go:157] found existing configuration files:
	
	I0211 03:06:04.612820   58214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0211 03:06:04.636942   58214 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0211 03:06:04.637018   58214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0211 03:06:04.649214   58214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0211 03:06:04.660946   58214 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0211 03:06:04.661007   58214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0211 03:06:04.674815   58214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0211 03:06:04.690075   58214 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0211 03:06:04.690157   58214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0211 03:06:04.700694   58214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0211 03:06:04.709982   58214 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0211 03:06:04.710064   58214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0211 03:06:04.721122   58214 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0211 03:06:04.827438   58214 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0211 03:06:04.827584   58214 kubeadm.go:310] [preflight] Running pre-flight checks
	I0211 03:06:04.991972   58214 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0211 03:06:04.992124   58214 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0211 03:06:04.992257   58214 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0211 03:06:05.165577   58214 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0211 03:06:05.357880   58214 out.go:235]   - Generating certificates and keys ...
	I0211 03:06:05.358003   58214 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0211 03:06:05.358118   58214 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0211 03:06:05.510177   58214 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0211 03:06:05.691522   58214 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0211 03:06:05.805834   58214 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0211 03:06:06.102773   58214 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0211 03:06:06.386799   58214 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0211 03:06:06.387006   58214 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-241335 localhost] and IPs [192.168.50.243 127.0.0.1 ::1]
	I0211 03:06:06.448710   58214 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0211 03:06:06.449059   58214 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-241335 localhost] and IPs [192.168.50.243 127.0.0.1 ::1]
	I0211 03:06:06.692636   58214 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0211 03:06:06.851453   58214 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0211 03:06:07.144508   58214 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0211 03:06:07.144701   58214 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0211 03:06:07.243280   58214 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0211 03:06:07.347624   58214 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0211 03:06:07.433894   58214 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0211 03:06:07.711783   58214 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0211 03:06:07.728792   58214 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0211 03:06:07.730343   58214 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0211 03:06:07.730429   58214 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0211 03:06:07.869893   58214 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0211 03:06:07.886064   58214 out.go:235]   - Booting up control plane ...
	I0211 03:06:07.886204   58214 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0211 03:06:07.886348   58214 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0211 03:06:07.886527   58214 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0211 03:06:07.886666   58214 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0211 03:06:07.889808   58214 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0211 03:06:47.883347   58214 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0211 03:06:47.883740   58214 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0211 03:06:47.884016   58214 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0211 03:06:52.885005   58214 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0211 03:06:52.885283   58214 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0211 03:07:02.884564   58214 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0211 03:07:02.884810   58214 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0211 03:07:22.884476   58214 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0211 03:07:22.884715   58214 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0211 03:08:02.886297   58214 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0211 03:08:02.886955   58214 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0211 03:08:02.886988   58214 kubeadm.go:310] 
	I0211 03:08:02.887138   58214 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0211 03:08:02.887240   58214 kubeadm.go:310] 		timed out waiting for the condition
	I0211 03:08:02.887257   58214 kubeadm.go:310] 
	I0211 03:08:02.887341   58214 kubeadm.go:310] 	This error is likely caused by:
	I0211 03:08:02.887419   58214 kubeadm.go:310] 		- The kubelet is not running
	I0211 03:08:02.887602   58214 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0211 03:08:02.887613   58214 kubeadm.go:310] 
	I0211 03:08:02.887881   58214 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0211 03:08:02.887979   58214 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0211 03:08:02.888047   58214 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0211 03:08:02.888060   58214 kubeadm.go:310] 
	I0211 03:08:02.888362   58214 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0211 03:08:02.888577   58214 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0211 03:08:02.888590   58214 kubeadm.go:310] 
	I0211 03:08:02.888841   58214 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0211 03:08:02.889061   58214 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0211 03:08:02.889232   58214 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0211 03:08:02.889500   58214 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0211 03:08:02.889544   58214 kubeadm.go:310] 
	I0211 03:08:02.889762   58214 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0211 03:08:02.890346   58214 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0211 03:08:02.890460   58214 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0211 03:08:02.890606   58214 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-241335 localhost] and IPs [192.168.50.243 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-241335 localhost] and IPs [192.168.50.243 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-241335 localhost] and IPs [192.168.50.243 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-241335 localhost] and IPs [192.168.50.243 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0211 03:08:02.890647   58214 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0211 03:08:03.448390   58214 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0211 03:08:03.461692   58214 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0211 03:08:03.471388   58214 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0211 03:08:03.471414   58214 kubeadm.go:157] found existing configuration files:
	
	I0211 03:08:03.471455   58214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0211 03:08:03.480749   58214 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0211 03:08:03.480820   58214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0211 03:08:03.490339   58214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0211 03:08:03.499766   58214 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0211 03:08:03.499833   58214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0211 03:08:03.509114   58214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0211 03:08:03.517821   58214 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0211 03:08:03.517871   58214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0211 03:08:03.526631   58214 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0211 03:08:03.535315   58214 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0211 03:08:03.535380   58214 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0211 03:08:03.544579   58214 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0211 03:08:03.741145   58214 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0211 03:10:00.019763   58214 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0211 03:10:00.019861   58214 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0211 03:10:00.021731   58214 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0211 03:10:00.021787   58214 kubeadm.go:310] [preflight] Running pre-flight checks
	I0211 03:10:00.021849   58214 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0211 03:10:00.022004   58214 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0211 03:10:00.022150   58214 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0211 03:10:00.022238   58214 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0211 03:10:00.024214   58214 out.go:235]   - Generating certificates and keys ...
	I0211 03:10:00.024280   58214 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0211 03:10:00.024392   58214 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0211 03:10:00.024515   58214 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0211 03:10:00.024621   58214 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0211 03:10:00.024726   58214 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0211 03:10:00.024816   58214 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0211 03:10:00.024909   58214 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0211 03:10:00.024991   58214 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0211 03:10:00.025089   58214 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0211 03:10:00.025223   58214 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0211 03:10:00.025285   58214 kubeadm.go:310] [certs] Using the existing "sa" key
	I0211 03:10:00.025396   58214 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0211 03:10:00.025455   58214 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0211 03:10:00.025501   58214 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0211 03:10:00.025564   58214 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0211 03:10:00.025626   58214 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0211 03:10:00.025724   58214 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0211 03:10:00.025796   58214 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0211 03:10:00.025831   58214 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0211 03:10:00.025899   58214 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0211 03:10:00.027204   58214 out.go:235]   - Booting up control plane ...
	I0211 03:10:00.027277   58214 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0211 03:10:00.027353   58214 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0211 03:10:00.027424   58214 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0211 03:10:00.027508   58214 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0211 03:10:00.027647   58214 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0211 03:10:00.027717   58214 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0211 03:10:00.027804   58214 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0211 03:10:00.027975   58214 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0211 03:10:00.028075   58214 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0211 03:10:00.028289   58214 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0211 03:10:00.028353   58214 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0211 03:10:00.028567   58214 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0211 03:10:00.028643   58214 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0211 03:10:00.028828   58214 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0211 03:10:00.028916   58214 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0211 03:10:00.029111   58214 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0211 03:10:00.029123   58214 kubeadm.go:310] 
	I0211 03:10:00.029182   58214 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0211 03:10:00.029234   58214 kubeadm.go:310] 		timed out waiting for the condition
	I0211 03:10:00.029245   58214 kubeadm.go:310] 
	I0211 03:10:00.029301   58214 kubeadm.go:310] 	This error is likely caused by:
	I0211 03:10:00.029343   58214 kubeadm.go:310] 		- The kubelet is not running
	I0211 03:10:00.029489   58214 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0211 03:10:00.029500   58214 kubeadm.go:310] 
	I0211 03:10:00.029641   58214 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0211 03:10:00.029689   58214 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0211 03:10:00.029740   58214 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0211 03:10:00.029750   58214 kubeadm.go:310] 
	I0211 03:10:00.029862   58214 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0211 03:10:00.029961   58214 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0211 03:10:00.029970   58214 kubeadm.go:310] 
	I0211 03:10:00.030086   58214 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0211 03:10:00.030203   58214 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0211 03:10:00.030281   58214 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0211 03:10:00.030384   58214 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0211 03:10:00.030459   58214 kubeadm.go:310] 
	I0211 03:10:00.030461   58214 kubeadm.go:394] duration metric: took 3m55.4999346s to StartCluster
	I0211 03:10:00.030503   58214 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:10:00.030566   58214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:10:00.068886   58214 cri.go:89] found id: ""
	I0211 03:10:00.068926   58214 logs.go:282] 0 containers: []
	W0211 03:10:00.068933   58214 logs.go:284] No container was found matching "kube-apiserver"
	I0211 03:10:00.068939   58214 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:10:00.068987   58214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:10:00.099712   58214 cri.go:89] found id: ""
	I0211 03:10:00.099744   58214 logs.go:282] 0 containers: []
	W0211 03:10:00.099751   58214 logs.go:284] No container was found matching "etcd"
	I0211 03:10:00.099757   58214 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:10:00.099805   58214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:10:00.135548   58214 cri.go:89] found id: ""
	I0211 03:10:00.135570   58214 logs.go:282] 0 containers: []
	W0211 03:10:00.135577   58214 logs.go:284] No container was found matching "coredns"
	I0211 03:10:00.135582   58214 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:10:00.135626   58214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:10:00.171788   58214 cri.go:89] found id: ""
	I0211 03:10:00.171816   58214 logs.go:282] 0 containers: []
	W0211 03:10:00.171824   58214 logs.go:284] No container was found matching "kube-scheduler"
	I0211 03:10:00.171830   58214 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:10:00.171876   58214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:10:00.206165   58214 cri.go:89] found id: ""
	I0211 03:10:00.206191   58214 logs.go:282] 0 containers: []
	W0211 03:10:00.206203   58214 logs.go:284] No container was found matching "kube-proxy"
	I0211 03:10:00.206211   58214 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:10:00.206270   58214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:10:00.237509   58214 cri.go:89] found id: ""
	I0211 03:10:00.237534   58214 logs.go:282] 0 containers: []
	W0211 03:10:00.237541   58214 logs.go:284] No container was found matching "kube-controller-manager"
	I0211 03:10:00.237548   58214 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:10:00.237593   58214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:10:00.269844   58214 cri.go:89] found id: ""
	I0211 03:10:00.269873   58214 logs.go:282] 0 containers: []
	W0211 03:10:00.269881   58214 logs.go:284] No container was found matching "kindnet"
	I0211 03:10:00.269890   58214 logs.go:123] Gathering logs for dmesg ...
	I0211 03:10:00.269901   58214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:10:00.282335   58214 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:10:00.282359   58214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:10:00.392716   58214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:10:00.392737   58214 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:10:00.392749   58214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:10:00.488656   58214 logs.go:123] Gathering logs for container status ...
	I0211 03:10:00.488689   58214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:10:00.527176   58214 logs.go:123] Gathering logs for kubelet ...
	I0211 03:10:00.527212   58214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0211 03:10:00.592439   58214 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0211 03:10:00.592498   58214 out.go:270] * 
	* 
	W0211 03:10:00.592567   58214 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0211 03:10:00.592584   58214 out.go:270] * 
	* 
	W0211 03:10:00.593430   58214 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0211 03:10:00.596437   58214 out.go:201] 
	W0211 03:10:00.597417   58214 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0211 03:10:00.597455   58214 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0211 03:10:00.597479   58214 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0211 03:10:00.598892   58214 out.go:201] 

                                                
                                                
** /stderr **
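The suggestion captured in the log above is to check 'journalctl -xeu kubelet' and then try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start. A minimal sketch of that retry, reusing the profile and flags from the failing invocation in this run; whether the cgroup-driver override actually lets the v1.20.0 kubelet come up here is an assumption, not something this report verifies:

	# inspect why the kubelet never answered on :10248 (same profile as the failed start)
	out/minikube-linux-amd64 -p kubernetes-upgrade-241335 ssh "sudo journalctl -xeu kubelet"
	# retry the oldest-version start with the suggested kubelet cgroup-driver override
	out/minikube-linux-amd64 start -p kubernetes-upgrade-241335 --memory=2200 \
	  --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd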
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-241335 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-241335
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-241335: (6.290650755s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-241335 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-241335 status --format={{.Host}}: exit status 7 (61.130612ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-241335 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-241335 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (41.623580585s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-241335 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-241335 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-241335 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (82.772425ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-241335] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20400
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20400-12456/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20400-12456/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-241335
	    minikube start -p kubernetes-upgrade-241335 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2413352 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.1, by running:
	    
	    minikube start -p kubernetes-upgrade-241335 --kubernetes-version=v1.32.1
	    

                                                
                                                
** /stderr **
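The refusal above is the outcome this step expects (exit status 106, K8S_DOWNGRADE_UNSUPPORTED), and the error message itself lists the recovery options. A compact sketch of the first option, using the profile name from this run; note that delete discards the existing v1.32.1 cluster:

	# option 1 from the message: recreate the cluster at the older Kubernetes version
	minikube delete -p kubernetes-upgrade-241335
	minikube start -p kubernetes-upgrade-241335 --kubernetes-version=v1.20.0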
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-241335 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-241335 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (46.464355s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2025-02-11 03:11:35.240508193 +0000 UTC m=+4186.671578446
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-241335 -n kubernetes-upgrade-241335
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-241335 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-241335 logs -n 25: (1.925549703s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p running-upgrade-378121                              | running-upgrade-378121    | jenkins | v1.35.0 | 11 Feb 25 03:05 UTC | 11 Feb 25 03:05 UTC |
	| start   | -p pause-224871 --memory=2048                          | pause-224871              | jenkins | v1.35.0 | 11 Feb 25 03:05 UTC | 11 Feb 25 03:06 UTC |
	|         | --install-addons=false                                 |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2                               |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-369064 sudo                            | NoKubernetes-369064       | jenkins | v1.35.0 | 11 Feb 25 03:05 UTC |                     |
	|         | systemctl is-active --quiet                            |                           |         |         |                     |                     |
	|         | service kubelet                                        |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-369064                                 | NoKubernetes-369064       | jenkins | v1.35.0 | 11 Feb 25 03:05 UTC | 11 Feb 25 03:05 UTC |
	| start   | -p stopped-upgrade-285044                              | minikube                  | jenkins | v1.26.0 | 11 Feb 25 03:05 UTC | 11 Feb 25 03:07 UTC |
	|         | --memory=2200 --vm-driver=kvm2                         |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                           |         |         |                     |                     |
	| start   | -p pause-224871                                        | pause-224871              | jenkins | v1.35.0 | 11 Feb 25 03:06 UTC | 11 Feb 25 03:07 UTC |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-285044 stop                            | minikube                  | jenkins | v1.26.0 | 11 Feb 25 03:07 UTC | 11 Feb 25 03:07 UTC |
	| start   | -p stopped-upgrade-285044                              | stopped-upgrade-285044    | jenkins | v1.35.0 | 11 Feb 25 03:07 UTC | 11 Feb 25 03:07 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| start   | -p cert-expiration-411526                              | cert-expiration-411526    | jenkins | v1.35.0 | 11 Feb 25 03:07 UTC |                     |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                           |         |         |                     |                     |
	|         | --driver=kvm2                                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| pause   | -p pause-224871                                        | pause-224871              | jenkins | v1.35.0 | 11 Feb 25 03:07 UTC | 11 Feb 25 03:07 UTC |
	|         | --alsologtostderr -v=5                                 |                           |         |         |                     |                     |
	| unpause | -p pause-224871                                        | pause-224871              | jenkins | v1.35.0 | 11 Feb 25 03:07 UTC | 11 Feb 25 03:07 UTC |
	|         | --alsologtostderr -v=5                                 |                           |         |         |                     |                     |
	| pause   | -p pause-224871                                        | pause-224871              | jenkins | v1.35.0 | 11 Feb 25 03:07 UTC | 11 Feb 25 03:07 UTC |
	|         | --alsologtostderr -v=5                                 |                           |         |         |                     |                     |
	| delete  | -p pause-224871                                        | pause-224871              | jenkins | v1.35.0 | 11 Feb 25 03:07 UTC | 11 Feb 25 03:07 UTC |
	|         | --alsologtostderr -v=5                                 |                           |         |         |                     |                     |
	| delete  | -p pause-224871                                        | pause-224871              | jenkins | v1.35.0 | 11 Feb 25 03:07 UTC | 11 Feb 25 03:07 UTC |
	| start   | -p old-k8s-version-244815                              | old-k8s-version-244815    | jenkins | v1.35.0 | 11 Feb 25 03:07 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-285044                              | stopped-upgrade-285044    | jenkins | v1.35.0 | 11 Feb 25 03:07 UTC | 11 Feb 25 03:07 UTC |
	| start   | -p no-preload-214316                                   | no-preload-214316         | jenkins | v1.35.0 | 11 Feb 25 03:07 UTC | 11 Feb 25 03:09 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-214316             | no-preload-214316         | jenkins | v1.35.0 | 11 Feb 25 03:09 UTC | 11 Feb 25 03:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p no-preload-214316                                   | no-preload-214316         | jenkins | v1.35.0 | 11 Feb 25 03:09 UTC | 11 Feb 25 03:11 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-241335                           | kubernetes-upgrade-241335 | jenkins | v1.35.0 | 11 Feb 25 03:10 UTC | 11 Feb 25 03:10 UTC |
	| start   | -p kubernetes-upgrade-241335                           | kubernetes-upgrade-241335 | jenkins | v1.35.0 | 11 Feb 25 03:10 UTC | 11 Feb 25 03:10 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-241335                           | kubernetes-upgrade-241335 | jenkins | v1.35.0 | 11 Feb 25 03:10 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	|         | --driver=kvm2                                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-241335                           | kubernetes-upgrade-241335 | jenkins | v1.35.0 | 11 Feb 25 03:10 UTC | 11 Feb 25 03:11 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio                               |                           |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-214316                  | no-preload-214316         | jenkins | v1.35.0 | 11 Feb 25 03:11 UTC | 11 Feb 25 03:11 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p no-preload-214316                                   | no-preload-214316         | jenkins | v1.35.0 | 11 Feb 25 03:11 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1                           |                           |         |         |                     |                     |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/11 03:11:01
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0211 03:11:01.347862   61971 out.go:345] Setting OutFile to fd 1 ...
	I0211 03:11:01.347986   61971 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 03:11:01.347998   61971 out.go:358] Setting ErrFile to fd 2...
	I0211 03:11:01.348004   61971 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 03:11:01.348273   61971 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20400-12456/.minikube/bin
	I0211 03:11:01.348988   61971 out.go:352] Setting JSON to false
	I0211 03:11:01.350328   61971 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6812,"bootTime":1739236649,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0211 03:11:01.350462   61971 start.go:139] virtualization: kvm guest
	I0211 03:11:01.352647   61971 out.go:177] * [no-preload-214316] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0211 03:11:01.353928   61971 notify.go:220] Checking for updates...
	I0211 03:11:01.353973   61971 out.go:177]   - MINIKUBE_LOCATION=20400
	I0211 03:11:01.355205   61971 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0211 03:11:01.356522   61971 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20400-12456/kubeconfig
	I0211 03:11:01.357744   61971 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20400-12456/.minikube
	I0211 03:11:01.358893   61971 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0211 03:11:01.360108   61971 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0211 03:11:01.361774   61971 config.go:182] Loaded profile config "no-preload-214316": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0211 03:11:01.362364   61971 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 03:11:01.362440   61971 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 03:11:01.377336   61971 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41047
	I0211 03:11:01.377877   61971 main.go:141] libmachine: () Calling .GetVersion
	I0211 03:11:01.378589   61971 main.go:141] libmachine: Using API Version  1
	I0211 03:11:01.378617   61971 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 03:11:01.379063   61971 main.go:141] libmachine: () Calling .GetMachineName
	I0211 03:11:01.379284   61971 main.go:141] libmachine: (no-preload-214316) Calling .DriverName
	I0211 03:11:01.379555   61971 driver.go:394] Setting default libvirt URI to qemu:///system
	I0211 03:11:01.379851   61971 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 03:11:01.379884   61971 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 03:11:01.394447   61971 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43085
	I0211 03:11:01.394957   61971 main.go:141] libmachine: () Calling .GetVersion
	I0211 03:11:01.395494   61971 main.go:141] libmachine: Using API Version  1
	I0211 03:11:01.395521   61971 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 03:11:01.395849   61971 main.go:141] libmachine: () Calling .GetMachineName
	I0211 03:11:01.396048   61971 main.go:141] libmachine: (no-preload-214316) Calling .DriverName
	I0211 03:11:01.431130   61971 out.go:177] * Using the kvm2 driver based on existing profile
	I0211 03:11:01.432276   61971 start.go:297] selected driver: kvm2
	I0211 03:11:01.432297   61971 start.go:901] validating driver "kvm2" against &{Name:no-preload-214316 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-214316 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.66 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0211 03:11:01.432400   61971 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0211 03:11:01.433128   61971 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0211 03:11:01.433202   61971 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20400-12456/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0211 03:11:01.448343   61971 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0211 03:11:01.448721   61971 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0211 03:11:01.448751   61971 cni.go:84] Creating CNI manager for ""
	I0211 03:11:01.448796   61971 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0211 03:11:01.448836   61971 start.go:340] cluster config:
	{Name:no-preload-214316 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:no-preload-214316 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.66 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0211 03:11:01.448938   61971 iso.go:125] acquiring lock: {Name:mkf866c6e52b4efa55cc59a9f329105471716f9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0211 03:11:01.450489   61971 out.go:177] * Starting "no-preload-214316" primary control-plane node in "no-preload-214316" cluster
	I0211 03:11:01.451644   61971 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0211 03:11:01.451765   61971 profile.go:143] Saving config to /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/no-preload-214316/config.json ...
	I0211 03:11:01.451899   61971 cache.go:107] acquiring lock: {Name:mkb443d25ec4124decbe08df12228c01574a10f9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0211 03:11:01.451932   61971 cache.go:107] acquiring lock: {Name:mk3f81dc96af5224d0b32b1884f6fecf2afb26a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0211 03:11:01.451898   61971 cache.go:107] acquiring lock: {Name:mkd6f6e04f708245c058c1042c1c4cca27d49cfa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0211 03:11:01.451945   61971 cache.go:107] acquiring lock: {Name:mk35a8a64191333b556a53f197bd3a48fd92712c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0211 03:11:01.451990   61971 cache.go:115] /home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1 exists
	I0211 03:11:01.451995   61971 cache.go:115] /home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0211 03:11:01.452006   61971 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.32.1" -> "/home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1" took 112.779µs
	I0211 03:11:01.452011   61971 start.go:360] acquireMachinesLock for no-preload-214316: {Name:mk0cbf79bfabdb28d0a301765db34c154a72eff0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0211 03:11:01.452022   61971 cache.go:115] /home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1 exists
	I0211 03:11:01.452031   61971 cache.go:115] /home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0211 03:11:01.452004   61971 cache.go:107] acquiring lock: {Name:mk674019c3cf53eeb0fd7b337f1d8581bae60921 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0211 03:11:01.452017   61971 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.32.1 -> /home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.32.1 succeeded
	I0211 03:11:01.452005   61971 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 114.572µs
	I0211 03:11:01.452031   61971 cache.go:107] acquiring lock: {Name:mk6b7be268865e963974771560730da000f0403b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0211 03:11:01.452040   61971 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3" took 110.359µs
	I0211 03:11:01.452052   61971 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0211 03:11:01.452057   61971 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0211 03:11:01.452043   61971 start.go:364] duration metric: took 17.585µs to acquireMachinesLock for "no-preload-214316"
	I0211 03:11:01.452032   61971 cache.go:107] acquiring lock: {Name:mk10a9e40829ae3606176eefabc3753605abd1cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0211 03:11:01.452074   61971 start.go:96] Skipping create...Using existing machine configuration
	I0211 03:11:01.452081   61971 fix.go:54] fixHost starting: 
	I0211 03:11:01.452031   61971 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.32.1" -> "/home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1" took 93.726µs
	I0211 03:11:01.452127   61971 cache.go:115] /home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1 exists
	I0211 03:11:01.452138   61971 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.32.1 -> /home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.32.1 succeeded
	I0211 03:11:01.452144   61971 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.32.1" -> "/home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1" took 115.309µs
	I0211 03:11:01.452158   61971 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.32.1 -> /home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.32.1 succeeded
	I0211 03:11:01.452167   61971 cache.go:115] /home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 exists
	I0211 03:11:01.452187   61971 cache.go:96] cache image "registry.k8s.io/etcd:3.5.16-0" -> "/home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0" took 207.534µs
	I0211 03:11:01.452188   61971 cache.go:115] /home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1 exists
	I0211 03:11:01.452196   61971 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.16-0 -> /home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.16-0 succeeded
	I0211 03:11:01.452204   61971 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.32.1" -> "/home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1" took 223.25µs
	I0211 03:11:01.452215   61971 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.32.1 -> /home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.32.1 succeeded
	I0211 03:11:01.452276   61971 cache.go:107] acquiring lock: {Name:mkef206d5aca25dcb9d98cfd9229c83c85206657 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0211 03:11:01.452357   61971 cache.go:115] /home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0211 03:11:01.452366   61971 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 119.917µs
	I0211 03:11:01.452378   61971 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I0211 03:11:01.452394   61971 cache.go:87] Successfully saved all images to host disk.
	I0211 03:11:01.452420   61971 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 03:11:01.452448   61971 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 03:11:01.466431   61971 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37089
	I0211 03:11:01.466772   61971 main.go:141] libmachine: () Calling .GetVersion
	I0211 03:11:01.467232   61971 main.go:141] libmachine: Using API Version  1
	I0211 03:11:01.467262   61971 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 03:11:01.467622   61971 main.go:141] libmachine: () Calling .GetMachineName
	I0211 03:11:01.467819   61971 main.go:141] libmachine: (no-preload-214316) Calling .DriverName
	I0211 03:11:01.467948   61971 main.go:141] libmachine: (no-preload-214316) Calling .GetState
	I0211 03:11:01.469488   61971 fix.go:112] recreateIfNeeded on no-preload-214316: state=Stopped err=<nil>
	I0211 03:11:01.469520   61971 main.go:141] libmachine: (no-preload-214316) Calling .DriverName
	W0211 03:11:01.469638   61971 fix.go:138] unexpected machine state, will restart: <nil>
	I0211 03:11:01.472124   61971 out.go:177] * Restarting existing kvm2 VM for "no-preload-214316" ...
	I0211 03:10:59.212116   61831 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0211 03:10:59.212145   61831 machine.go:96] duration metric: took 10.268022731s to provisionDockerMachine
	I0211 03:10:59.212158   61831 start.go:293] postStartSetup for "kubernetes-upgrade-241335" (driver="kvm2")
	I0211 03:10:59.212191   61831 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0211 03:10:59.212215   61831 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .DriverName
	I0211 03:10:59.212537   61831 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0211 03:10:59.212560   61831 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHHostname
	I0211 03:10:59.215784   61831 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has defined MAC address 52:54:00:41:58:16 in network mk-kubernetes-upgrade-241335
	I0211 03:10:59.216215   61831 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:58:16", ip: ""} in network mk-kubernetes-upgrade-241335: {Iface:virbr2 ExpiryTime:2025-02-11 04:10:18 +0000 UTC Type:0 Mac:52:54:00:41:58:16 Iaid: IPaddr:192.168.50.243 Prefix:24 Hostname:kubernetes-upgrade-241335 Clientid:01:52:54:00:41:58:16}
	I0211 03:10:59.216245   61831 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has defined IP address 192.168.50.243 and MAC address 52:54:00:41:58:16 in network mk-kubernetes-upgrade-241335
	I0211 03:10:59.216416   61831 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHPort
	I0211 03:10:59.216577   61831 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHKeyPath
	I0211 03:10:59.216729   61831 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHUsername
	I0211 03:10:59.216847   61831 sshutil.go:53] new ssh client: &{IP:192.168.50.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/kubernetes-upgrade-241335/id_rsa Username:docker}
	I0211 03:10:59.297757   61831 ssh_runner.go:195] Run: cat /etc/os-release
	I0211 03:10:59.301833   61831 info.go:137] Remote host: Buildroot 2023.02.9
	I0211 03:10:59.301859   61831 filesync.go:126] Scanning /home/jenkins/minikube-integration/20400-12456/.minikube/addons for local assets ...
	I0211 03:10:59.301934   61831 filesync.go:126] Scanning /home/jenkins/minikube-integration/20400-12456/.minikube/files for local assets ...
	I0211 03:10:59.302089   61831 filesync.go:149] local asset: /home/jenkins/minikube-integration/20400-12456/.minikube/files/etc/ssl/certs/196452.pem -> 196452.pem in /etc/ssl/certs
	I0211 03:10:59.302229   61831 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0211 03:10:59.311271   61831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/files/etc/ssl/certs/196452.pem --> /etc/ssl/certs/196452.pem (1708 bytes)
	I0211 03:10:59.333811   61831 start.go:296] duration metric: took 121.636632ms for postStartSetup
	I0211 03:10:59.333856   61831 fix.go:56] duration metric: took 10.410604799s for fixHost
	I0211 03:10:59.333881   61831 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHHostname
	I0211 03:10:59.337076   61831 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has defined MAC address 52:54:00:41:58:16 in network mk-kubernetes-upgrade-241335
	I0211 03:10:59.337473   61831 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:58:16", ip: ""} in network mk-kubernetes-upgrade-241335: {Iface:virbr2 ExpiryTime:2025-02-11 04:10:18 +0000 UTC Type:0 Mac:52:54:00:41:58:16 Iaid: IPaddr:192.168.50.243 Prefix:24 Hostname:kubernetes-upgrade-241335 Clientid:01:52:54:00:41:58:16}
	I0211 03:10:59.337502   61831 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has defined IP address 192.168.50.243 and MAC address 52:54:00:41:58:16 in network mk-kubernetes-upgrade-241335
	I0211 03:10:59.337621   61831 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHPort
	I0211 03:10:59.337819   61831 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHKeyPath
	I0211 03:10:59.337997   61831 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHKeyPath
	I0211 03:10:59.338139   61831 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHUsername
	I0211 03:10:59.338297   61831 main.go:141] libmachine: Using SSH client type: native
	I0211 03:10:59.338505   61831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.50.243 22 <nil> <nil>}
	I0211 03:10:59.338521   61831 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0211 03:10:59.435272   61831 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739243459.426383268
	
	I0211 03:10:59.435293   61831 fix.go:216] guest clock: 1739243459.426383268
	I0211 03:10:59.435300   61831 fix.go:229] Guest: 2025-02-11 03:10:59.426383268 +0000 UTC Remote: 2025-02-11 03:10:59.333861832 +0000 UTC m=+10.553787674 (delta=92.521436ms)
	I0211 03:10:59.435342   61831 fix.go:200] guest clock delta is within tolerance: 92.521436ms
	I0211 03:10:59.435349   61831 start.go:83] releasing machines lock for "kubernetes-upgrade-241335", held for 10.51211428s
	I0211 03:10:59.435373   61831 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .DriverName
	I0211 03:10:59.435656   61831 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetIP
	I0211 03:10:59.438203   61831 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has defined MAC address 52:54:00:41:58:16 in network mk-kubernetes-upgrade-241335
	I0211 03:10:59.438559   61831 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:58:16", ip: ""} in network mk-kubernetes-upgrade-241335: {Iface:virbr2 ExpiryTime:2025-02-11 04:10:18 +0000 UTC Type:0 Mac:52:54:00:41:58:16 Iaid: IPaddr:192.168.50.243 Prefix:24 Hostname:kubernetes-upgrade-241335 Clientid:01:52:54:00:41:58:16}
	I0211 03:10:59.438596   61831 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has defined IP address 192.168.50.243 and MAC address 52:54:00:41:58:16 in network mk-kubernetes-upgrade-241335
	I0211 03:10:59.438733   61831 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .DriverName
	I0211 03:10:59.439239   61831 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .DriverName
	I0211 03:10:59.439412   61831 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .DriverName
	I0211 03:10:59.439533   61831 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0211 03:10:59.439584   61831 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHHostname
	I0211 03:10:59.439609   61831 ssh_runner.go:195] Run: cat /version.json
	I0211 03:10:59.439628   61831 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHHostname
	I0211 03:10:59.442303   61831 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has defined MAC address 52:54:00:41:58:16 in network mk-kubernetes-upgrade-241335
	I0211 03:10:59.442622   61831 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:58:16", ip: ""} in network mk-kubernetes-upgrade-241335: {Iface:virbr2 ExpiryTime:2025-02-11 04:10:18 +0000 UTC Type:0 Mac:52:54:00:41:58:16 Iaid: IPaddr:192.168.50.243 Prefix:24 Hostname:kubernetes-upgrade-241335 Clientid:01:52:54:00:41:58:16}
	I0211 03:10:59.442647   61831 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has defined IP address 192.168.50.243 and MAC address 52:54:00:41:58:16 in network mk-kubernetes-upgrade-241335
	I0211 03:10:59.442666   61831 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has defined MAC address 52:54:00:41:58:16 in network mk-kubernetes-upgrade-241335
	I0211 03:10:59.442793   61831 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHPort
	I0211 03:10:59.442965   61831 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHKeyPath
	I0211 03:10:59.443116   61831 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHUsername
	I0211 03:10:59.443175   61831 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:58:16", ip: ""} in network mk-kubernetes-upgrade-241335: {Iface:virbr2 ExpiryTime:2025-02-11 04:10:18 +0000 UTC Type:0 Mac:52:54:00:41:58:16 Iaid: IPaddr:192.168.50.243 Prefix:24 Hostname:kubernetes-upgrade-241335 Clientid:01:52:54:00:41:58:16}
	I0211 03:10:59.443201   61831 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has defined IP address 192.168.50.243 and MAC address 52:54:00:41:58:16 in network mk-kubernetes-upgrade-241335
	I0211 03:10:59.443247   61831 sshutil.go:53] new ssh client: &{IP:192.168.50.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/kubernetes-upgrade-241335/id_rsa Username:docker}
	I0211 03:10:59.443320   61831 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHPort
	I0211 03:10:59.443447   61831 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHKeyPath
	I0211 03:10:59.443596   61831 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetSSHUsername
	I0211 03:10:59.443739   61831 sshutil.go:53] new ssh client: &{IP:192.168.50.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/kubernetes-upgrade-241335/id_rsa Username:docker}
	I0211 03:10:59.515526   61831 ssh_runner.go:195] Run: systemctl --version
	I0211 03:10:59.535558   61831 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0211 03:10:59.685837   61831 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0211 03:10:59.691643   61831 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0211 03:10:59.691750   61831 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0211 03:10:59.700774   61831 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0211 03:10:59.700798   61831 start.go:495] detecting cgroup driver to use...
	I0211 03:10:59.700864   61831 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0211 03:10:59.717307   61831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0211 03:10:59.730553   61831 docker.go:217] disabling cri-docker service (if available) ...
	I0211 03:10:59.730602   61831 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0211 03:10:59.744050   61831 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0211 03:10:59.757516   61831 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0211 03:10:59.884748   61831 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0211 03:11:00.013785   61831 docker.go:233] disabling docker service ...
	I0211 03:11:00.013866   61831 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0211 03:11:00.030293   61831 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0211 03:11:00.043030   61831 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0211 03:11:00.174541   61831 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0211 03:11:00.305178   61831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0211 03:11:00.319089   61831 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0211 03:11:00.336800   61831 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0211 03:11:00.336858   61831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 03:11:00.347009   61831 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0211 03:11:00.347064   61831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 03:11:00.357026   61831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 03:11:00.366941   61831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 03:11:00.376716   61831 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0211 03:11:00.386581   61831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 03:11:00.396056   61831 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 03:11:00.405803   61831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 03:11:00.415524   61831 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0211 03:11:00.424667   61831 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0211 03:11:00.437692   61831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0211 03:11:00.574776   61831 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0211 03:11:04.894112   61831 ssh_runner.go:235] Completed: sudo systemctl restart crio: (4.31929798s)
	I0211 03:11:04.894142   61831 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0211 03:11:04.894189   61831 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0211 03:11:04.898796   61831 start.go:563] Will wait 60s for crictl version
	I0211 03:11:04.898848   61831 ssh_runner.go:195] Run: which crictl
	I0211 03:11:04.903103   61831 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0211 03:11:04.940423   61831 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0211 03:11:04.940521   61831 ssh_runner.go:195] Run: crio --version
	I0211 03:11:04.968308   61831 ssh_runner.go:195] Run: crio --version
	I0211 03:11:04.998549   61831 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0211 03:11:01.925860   59687 api_server.go:253] Checking apiserver healthz at https://192.168.72.237:8443/healthz ...
	I0211 03:11:01.926431   59687 api_server.go:269] stopped: https://192.168.72.237:8443/healthz: Get "https://192.168.72.237:8443/healthz": dial tcp 192.168.72.237:8443: connect: connection refused
	I0211 03:11:01.926484   59687 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:11:01.926533   59687 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:11:01.962189   59687 cri.go:89] found id: "eb2e430ef7e22d59dd8fba5a533d5884039a60a2f0631b77995096df14918517"
	I0211 03:11:01.962203   59687 cri.go:89] found id: ""
	I0211 03:11:01.962210   59687 logs.go:282] 1 containers: [eb2e430ef7e22d59dd8fba5a533d5884039a60a2f0631b77995096df14918517]
	I0211 03:11:01.962265   59687 ssh_runner.go:195] Run: which crictl
	I0211 03:11:01.966155   59687 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:11:01.966208   59687 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:11:02.002047   59687 cri.go:89] found id: "2ee09862f1deb93facef388bc109006df1a74537d60642926f59eddfbf77bd3e"
	I0211 03:11:02.002057   59687 cri.go:89] found id: ""
	I0211 03:11:02.002062   59687 logs.go:282] 1 containers: [2ee09862f1deb93facef388bc109006df1a74537d60642926f59eddfbf77bd3e]
	I0211 03:11:02.002107   59687 ssh_runner.go:195] Run: which crictl
	I0211 03:11:02.006111   59687 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:11:02.006169   59687 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:11:02.047726   59687 cri.go:89] found id: "47b627caa4e625082da44a90e425c239092fd954acfe7b76d88a7eb80baa95c5"
	I0211 03:11:02.047741   59687 cri.go:89] found id: ""
	I0211 03:11:02.047749   59687 logs.go:282] 1 containers: [47b627caa4e625082da44a90e425c239092fd954acfe7b76d88a7eb80baa95c5]
	I0211 03:11:02.047798   59687 ssh_runner.go:195] Run: which crictl
	I0211 03:11:02.051827   59687 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:11:02.051873   59687 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:11:02.090055   59687 cri.go:89] found id: "3042273f24524143b8fe6dcc224b6e97d1065c3a43fde70489808c1aa4fad41f"
	I0211 03:11:02.090067   59687 cri.go:89] found id: "b5596c6f1c8e18e5b56d4a5e920bbd7b8cc6b76551547eff1631a3cb896499a3"
	I0211 03:11:02.090071   59687 cri.go:89] found id: ""
	I0211 03:11:02.090079   59687 logs.go:282] 2 containers: [3042273f24524143b8fe6dcc224b6e97d1065c3a43fde70489808c1aa4fad41f b5596c6f1c8e18e5b56d4a5e920bbd7b8cc6b76551547eff1631a3cb896499a3]
	I0211 03:11:02.090128   59687 ssh_runner.go:195] Run: which crictl
	I0211 03:11:02.094430   59687 ssh_runner.go:195] Run: which crictl
	I0211 03:11:02.098293   59687 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:11:02.098344   59687 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:11:02.134323   59687 cri.go:89] found id: "35f673a27faec73592b515e17c31452aea3592df20a3aae13976448c0f6ecf9d"
	I0211 03:11:02.134333   59687 cri.go:89] found id: ""
	I0211 03:11:02.134339   59687 logs.go:282] 1 containers: [35f673a27faec73592b515e17c31452aea3592df20a3aae13976448c0f6ecf9d]
	I0211 03:11:02.134378   59687 ssh_runner.go:195] Run: which crictl
	I0211 03:11:02.138281   59687 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:11:02.138332   59687 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:11:02.178593   59687 cri.go:89] found id: "15778716001f3553cdbf5f030d4130e4f643ee2bad1e999d5e9fe2f1ee462e97"
	I0211 03:11:02.178606   59687 cri.go:89] found id: ""
	I0211 03:11:02.178614   59687 logs.go:282] 1 containers: [15778716001f3553cdbf5f030d4130e4f643ee2bad1e999d5e9fe2f1ee462e97]
	I0211 03:11:02.178673   59687 ssh_runner.go:195] Run: which crictl
	I0211 03:11:02.183232   59687 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:11:02.183285   59687 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:11:02.219431   59687 cri.go:89] found id: ""
	I0211 03:11:02.219446   59687 logs.go:282] 0 containers: []
	W0211 03:11:02.219455   59687 logs.go:284] No container was found matching "kindnet"
	I0211 03:11:02.219461   59687 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0211 03:11:02.219520   59687 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0211 03:11:02.255831   59687 cri.go:89] found id: "6c1d24d19d6c7a282543134a0b6d016775e484ea5f4595c699adf2c50b9d554a"
	I0211 03:11:02.255844   59687 cri.go:89] found id: ""
	I0211 03:11:02.255852   59687 logs.go:282] 1 containers: [6c1d24d19d6c7a282543134a0b6d016775e484ea5f4595c699adf2c50b9d554a]
	I0211 03:11:02.255906   59687 ssh_runner.go:195] Run: which crictl
	I0211 03:11:02.261797   59687 logs.go:123] Gathering logs for kubelet ...
	I0211 03:11:02.261811   59687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:11:02.358521   59687 logs.go:123] Gathering logs for dmesg ...
	I0211 03:11:02.358538   59687 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:11:02.371741   59687 logs.go:123] Gathering logs for coredns [47b627caa4e625082da44a90e425c239092fd954acfe7b76d88a7eb80baa95c5] ...
	I0211 03:11:02.371761   59687 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47b627caa4e625082da44a90e425c239092fd954acfe7b76d88a7eb80baa95c5"
	I0211 03:11:02.403238   59687 logs.go:123] Gathering logs for kube-proxy [35f673a27faec73592b515e17c31452aea3592df20a3aae13976448c0f6ecf9d] ...
	I0211 03:11:02.403253   59687 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35f673a27faec73592b515e17c31452aea3592df20a3aae13976448c0f6ecf9d"
	I0211 03:11:02.438891   59687 logs.go:123] Gathering logs for kube-controller-manager [15778716001f3553cdbf5f030d4130e4f643ee2bad1e999d5e9fe2f1ee462e97] ...
	I0211 03:11:02.438912   59687 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15778716001f3553cdbf5f030d4130e4f643ee2bad1e999d5e9fe2f1ee462e97"
	I0211 03:11:02.487392   59687 logs.go:123] Gathering logs for storage-provisioner [6c1d24d19d6c7a282543134a0b6d016775e484ea5f4595c699adf2c50b9d554a] ...
	I0211 03:11:02.487409   59687 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c1d24d19d6c7a282543134a0b6d016775e484ea5f4595c699adf2c50b9d554a"
	I0211 03:11:02.532671   59687 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:11:02.532691   59687 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:11:02.811649   59687 logs.go:123] Gathering logs for container status ...
	I0211 03:11:02.811665   59687 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:11:02.850575   59687 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:11:02.850591   59687 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:11:02.927711   59687 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:11:02.927723   59687 logs.go:123] Gathering logs for kube-apiserver [eb2e430ef7e22d59dd8fba5a533d5884039a60a2f0631b77995096df14918517] ...
	I0211 03:11:02.927739   59687 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb2e430ef7e22d59dd8fba5a533d5884039a60a2f0631b77995096df14918517"
	I0211 03:11:02.973530   59687 logs.go:123] Gathering logs for etcd [2ee09862f1deb93facef388bc109006df1a74537d60642926f59eddfbf77bd3e] ...
	I0211 03:11:02.973556   59687 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ee09862f1deb93facef388bc109006df1a74537d60642926f59eddfbf77bd3e"
	I0211 03:11:03.036550   59687 logs.go:123] Gathering logs for kube-scheduler [3042273f24524143b8fe6dcc224b6e97d1065c3a43fde70489808c1aa4fad41f] ...
	I0211 03:11:03.036566   59687 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3042273f24524143b8fe6dcc224b6e97d1065c3a43fde70489808c1aa4fad41f"
	I0211 03:11:03.107746   59687 logs.go:123] Gathering logs for kube-scheduler [b5596c6f1c8e18e5b56d4a5e920bbd7b8cc6b76551547eff1631a3cb896499a3] ...
	I0211 03:11:03.107770   59687 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5596c6f1c8e18e5b56d4a5e920bbd7b8cc6b76551547eff1631a3cb896499a3"
	I0211 03:11:01.473145   61971 main.go:141] libmachine: (no-preload-214316) Calling .Start
	I0211 03:11:01.473334   61971 main.go:141] libmachine: (no-preload-214316) starting domain...
	I0211 03:11:01.473354   61971 main.go:141] libmachine: (no-preload-214316) ensuring networks are active...
	I0211 03:11:01.474063   61971 main.go:141] libmachine: (no-preload-214316) Ensuring network default is active
	I0211 03:11:01.474435   61971 main.go:141] libmachine: (no-preload-214316) Ensuring network mk-no-preload-214316 is active
	I0211 03:11:01.474819   61971 main.go:141] libmachine: (no-preload-214316) getting domain XML...
	I0211 03:11:01.475631   61971 main.go:141] libmachine: (no-preload-214316) creating domain...
	I0211 03:11:02.742688   61971 main.go:141] libmachine: (no-preload-214316) waiting for IP...
	I0211 03:11:02.743698   61971 main.go:141] libmachine: (no-preload-214316) DBG | domain no-preload-214316 has defined MAC address 52:54:00:d6:d2:62 in network mk-no-preload-214316
	I0211 03:11:02.744112   61971 main.go:141] libmachine: (no-preload-214316) DBG | unable to find current IP address of domain no-preload-214316 in network mk-no-preload-214316
	I0211 03:11:02.744220   61971 main.go:141] libmachine: (no-preload-214316) DBG | I0211 03:11:02.744098   62006 retry.go:31] will retry after 276.778879ms: waiting for domain to come up
	I0211 03:11:03.022588   61971 main.go:141] libmachine: (no-preload-214316) DBG | domain no-preload-214316 has defined MAC address 52:54:00:d6:d2:62 in network mk-no-preload-214316
	I0211 03:11:03.023144   61971 main.go:141] libmachine: (no-preload-214316) DBG | unable to find current IP address of domain no-preload-214316 in network mk-no-preload-214316
	I0211 03:11:03.023172   61971 main.go:141] libmachine: (no-preload-214316) DBG | I0211 03:11:03.023117   62006 retry.go:31] will retry after 301.47021ms: waiting for domain to come up
	I0211 03:11:03.326721   61971 main.go:141] libmachine: (no-preload-214316) DBG | domain no-preload-214316 has defined MAC address 52:54:00:d6:d2:62 in network mk-no-preload-214316
	I0211 03:11:03.327296   61971 main.go:141] libmachine: (no-preload-214316) DBG | unable to find current IP address of domain no-preload-214316 in network mk-no-preload-214316
	I0211 03:11:03.327334   61971 main.go:141] libmachine: (no-preload-214316) DBG | I0211 03:11:03.327263   62006 retry.go:31] will retry after 441.808331ms: waiting for domain to come up
	I0211 03:11:03.770824   61971 main.go:141] libmachine: (no-preload-214316) DBG | domain no-preload-214316 has defined MAC address 52:54:00:d6:d2:62 in network mk-no-preload-214316
	I0211 03:11:03.771358   61971 main.go:141] libmachine: (no-preload-214316) DBG | unable to find current IP address of domain no-preload-214316 in network mk-no-preload-214316
	I0211 03:11:03.771387   61971 main.go:141] libmachine: (no-preload-214316) DBG | I0211 03:11:03.771322   62006 retry.go:31] will retry after 507.112743ms: waiting for domain to come up
	I0211 03:11:04.279751   61971 main.go:141] libmachine: (no-preload-214316) DBG | domain no-preload-214316 has defined MAC address 52:54:00:d6:d2:62 in network mk-no-preload-214316
	I0211 03:11:04.280259   61971 main.go:141] libmachine: (no-preload-214316) DBG | unable to find current IP address of domain no-preload-214316 in network mk-no-preload-214316
	I0211 03:11:04.280279   61971 main.go:141] libmachine: (no-preload-214316) DBG | I0211 03:11:04.280237   62006 retry.go:31] will retry after 580.040436ms: waiting for domain to come up
	I0211 03:11:04.862098   61971 main.go:141] libmachine: (no-preload-214316) DBG | domain no-preload-214316 has defined MAC address 52:54:00:d6:d2:62 in network mk-no-preload-214316
	I0211 03:11:04.862683   61971 main.go:141] libmachine: (no-preload-214316) DBG | unable to find current IP address of domain no-preload-214316 in network mk-no-preload-214316
	I0211 03:11:04.862714   61971 main.go:141] libmachine: (no-preload-214316) DBG | I0211 03:11:04.862648   62006 retry.go:31] will retry after 766.941436ms: waiting for domain to come up
	I0211 03:11:05.631757   61971 main.go:141] libmachine: (no-preload-214316) DBG | domain no-preload-214316 has defined MAC address 52:54:00:d6:d2:62 in network mk-no-preload-214316
	I0211 03:11:05.632345   61971 main.go:141] libmachine: (no-preload-214316) DBG | unable to find current IP address of domain no-preload-214316 in network mk-no-preload-214316
	I0211 03:11:05.632375   61971 main.go:141] libmachine: (no-preload-214316) DBG | I0211 03:11:05.632309   62006 retry.go:31] will retry after 1.057041367s: waiting for domain to come up
	I0211 03:11:04.999764   61831 main.go:141] libmachine: (kubernetes-upgrade-241335) Calling .GetIP
	I0211 03:11:05.002585   61831 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has defined MAC address 52:54:00:41:58:16 in network mk-kubernetes-upgrade-241335
	I0211 03:11:05.002968   61831 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:58:16", ip: ""} in network mk-kubernetes-upgrade-241335: {Iface:virbr2 ExpiryTime:2025-02-11 04:10:18 +0000 UTC Type:0 Mac:52:54:00:41:58:16 Iaid: IPaddr:192.168.50.243 Prefix:24 Hostname:kubernetes-upgrade-241335 Clientid:01:52:54:00:41:58:16}
	I0211 03:11:05.002998   61831 main.go:141] libmachine: (kubernetes-upgrade-241335) DBG | domain kubernetes-upgrade-241335 has defined IP address 192.168.50.243 and MAC address 52:54:00:41:58:16 in network mk-kubernetes-upgrade-241335
	I0211 03:11:05.003265   61831 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0211 03:11:05.007533   61831 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-241335 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:kubernetes-upgrade-241335 Na
mespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.243 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizati
ons:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0211 03:11:05.007627   61831 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0211 03:11:05.007661   61831 ssh_runner.go:195] Run: sudo crictl images --output json
	I0211 03:11:05.049773   61831 crio.go:514] all images are preloaded for cri-o runtime.
	I0211 03:11:05.049794   61831 crio.go:433] Images already preloaded, skipping extraction
	I0211 03:11:05.049833   61831 ssh_runner.go:195] Run: sudo crictl images --output json
	I0211 03:11:05.086323   61831 crio.go:514] all images are preloaded for cri-o runtime.
	I0211 03:11:05.086343   61831 cache_images.go:84] Images are preloaded, skipping loading
	I0211 03:11:05.086350   61831 kubeadm.go:934] updating node { 192.168.50.243 8443 v1.32.1 crio true true} ...
	I0211 03:11:05.086438   61831 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-241335 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.243
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:kubernetes-upgrade-241335 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0211 03:11:05.086496   61831 ssh_runner.go:195] Run: crio config
	I0211 03:11:05.133598   61831 cni.go:84] Creating CNI manager for ""
	I0211 03:11:05.133625   61831 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0211 03:11:05.133647   61831 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0211 03:11:05.133674   61831 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.243 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-241335 NodeName:kubernetes-upgrade-241335 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.243"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.243 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0211 03:11:05.133833   61831 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.243
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-241335"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.243"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.243"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0211 03:11:05.133916   61831 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0211 03:11:05.144315   61831 binaries.go:44] Found k8s binaries, skipping transfer
	I0211 03:11:05.144431   61831 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0211 03:11:05.153588   61831 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I0211 03:11:05.169867   61831 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0211 03:11:05.186563   61831 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I0211 03:11:05.203948   61831 ssh_runner.go:195] Run: grep 192.168.50.243	control-plane.minikube.internal$ /etc/hosts
	I0211 03:11:05.207949   61831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0211 03:11:05.350494   61831 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0211 03:11:05.364348   61831 certs.go:68] Setting up /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/kubernetes-upgrade-241335 for IP: 192.168.50.243
	I0211 03:11:05.364370   61831 certs.go:194] generating shared ca certs ...
	I0211 03:11:05.364389   61831 certs.go:226] acquiring lock for ca certs: {Name:mk14e70e4f3b98aff6eac535114852cc1d70eb3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 03:11:05.364550   61831 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20400-12456/.minikube/ca.key
	I0211 03:11:05.364600   61831 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20400-12456/.minikube/proxy-client-ca.key
	I0211 03:11:05.364611   61831 certs.go:256] generating profile certs ...
	I0211 03:11:05.364740   61831 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/kubernetes-upgrade-241335/client.key
	I0211 03:11:05.364800   61831 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/kubernetes-upgrade-241335/apiserver.key.6266be30
	I0211 03:11:05.364854   61831 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/kubernetes-upgrade-241335/proxy-client.key
	I0211 03:11:05.365005   61831 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/19645.pem (1338 bytes)
	W0211 03:11:05.365042   61831 certs.go:480] ignoring /home/jenkins/minikube-integration/20400-12456/.minikube/certs/19645_empty.pem, impossibly tiny 0 bytes
	I0211 03:11:05.365056   61831 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca-key.pem (1675 bytes)
	I0211 03:11:05.365090   61831 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca.pem (1078 bytes)
	I0211 03:11:05.365128   61831 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/cert.pem (1123 bytes)
	I0211 03:11:05.365157   61831 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/key.pem (1679 bytes)
	I0211 03:11:05.365211   61831 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12456/.minikube/files/etc/ssl/certs/196452.pem (1708 bytes)
	I0211 03:11:05.366029   61831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0211 03:11:05.389908   61831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0211 03:11:05.412639   61831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0211 03:11:05.437589   61831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0211 03:11:05.459945   61831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/kubernetes-upgrade-241335/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0211 03:11:05.482229   61831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/kubernetes-upgrade-241335/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0211 03:11:05.504459   61831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/kubernetes-upgrade-241335/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0211 03:11:05.528180   61831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/kubernetes-upgrade-241335/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0211 03:11:05.553733   61831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/files/etc/ssl/certs/196452.pem --> /usr/share/ca-certificates/196452.pem (1708 bytes)
	I0211 03:11:05.576103   61831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0211 03:11:05.598495   61831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/certs/19645.pem --> /usr/share/ca-certificates/19645.pem (1338 bytes)
	I0211 03:11:05.623588   61831 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0211 03:11:05.640456   61831 ssh_runner.go:195] Run: openssl version
	I0211 03:11:05.646014   61831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/196452.pem && ln -fs /usr/share/ca-certificates/196452.pem /etc/ssl/certs/196452.pem"
	I0211 03:11:05.660466   61831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/196452.pem
	I0211 03:11:05.667307   61831 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 11 02:09 /usr/share/ca-certificates/196452.pem
	I0211 03:11:05.667366   61831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/196452.pem
	I0211 03:11:05.686771   61831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/196452.pem /etc/ssl/certs/3ec20f2e.0"
	I0211 03:11:05.700338   61831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0211 03:11:05.758439   61831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0211 03:11:05.785959   61831 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 11 02:02 /usr/share/ca-certificates/minikubeCA.pem
	I0211 03:11:05.786029   61831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0211 03:11:05.798696   61831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0211 03:11:05.897071   61831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19645.pem && ln -fs /usr/share/ca-certificates/19645.pem /etc/ssl/certs/19645.pem"
	I0211 03:11:06.088491   61831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19645.pem
	I0211 03:11:06.143228   61831 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 11 02:09 /usr/share/ca-certificates/19645.pem
	I0211 03:11:06.143356   61831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19645.pem
	I0211 03:11:06.190168   61831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/19645.pem /etc/ssl/certs/51391683.0"
	I0211 03:11:06.304880   61831 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0211 03:11:06.338418   61831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0211 03:11:06.399431   61831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0211 03:11:06.461029   61831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0211 03:11:06.540043   61831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0211 03:11:06.582718   61831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0211 03:11:06.616802   61831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0211 03:11:06.667724   61831 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-241335 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:kubernetes-upgrade-241335 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.243 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations
:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0211 03:11:06.667847   61831 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0211 03:11:06.667919   61831 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0211 03:11:06.899047   61831 cri.go:89] found id: "be7fc4b3ba6ddb74b8a1235e3de5dbf637689b6d8d4e280cf9bf192a70fd3f96"
	I0211 03:11:06.899077   61831 cri.go:89] found id: "c6c94b91bd77fe7d0c6db5805811bdeb70f71839f371a481c95a43a306370d69"
	I0211 03:11:06.899083   61831 cri.go:89] found id: "3b41a4f561256ccdc4b1b0743fff2c16d13c2fd36d3228752dd6e88de05a22dd"
	I0211 03:11:06.899087   61831 cri.go:89] found id: "5f6429fae84a458c55b38c280f2cfddcac33bf170844c9e7cdfc56449a44b7f0"
	I0211 03:11:06.899091   61831 cri.go:89] found id: "863716c5970f6a1254836eb7d7535235541da56bfe65c1b373815a870c5d68a6"
	I0211 03:11:06.899096   61831 cri.go:89] found id: "c11c545665d5eb3a123b51617ec6e51d04898ac20faa5d51b9f7bce1c64d9227"
	I0211 03:11:06.899099   61831 cri.go:89] found id: "bae404edbb4ad0ed048bd33cf444ada181641f4fdc1232c609f61ef6b9597c6e"
	I0211 03:11:06.899104   61831 cri.go:89] found id: "3f6e9326078d030b6515a1f110bc157336d38eccedb679b6cb1673245496b925"
	I0211 03:11:06.899107   61831 cri.go:89] found id: "deba36ef2d80157284b74756a690c3f3e62f06c4c440470d77123ac4cc33a5ab"
	I0211 03:11:06.899123   61831 cri.go:89] found id: "14e3e85b3f1b048e5da22852875827b14280e7c1495adc9bd1490165f7874569"
	I0211 03:11:06.899127   61831 cri.go:89] found id: "d7e63d172203c206e29b3e2dc480dacace3f894bac47d444f1b612d7f6c35b04"
	I0211 03:11:06.899131   61831 cri.go:89] found id: "2e6a3548b440c746f259153aa0fa20dfee05e960dd1391a57c6c6d7abf8795db"
	I0211 03:11:06.899136   61831 cri.go:89] found id: "0f82e196e457c86929123e79a832adb7294e88bc166bad7b23d4e10bf8bcdca0"
	I0211 03:11:06.899140   61831 cri.go:89] found id: "5455a659e4123cc9aabc593efc9ba4f89c440244c8984fe42d04eb0ee4c83c36"
	I0211 03:11:06.899146   61831 cri.go:89] found id: ""
	I0211 03:11:06.899206   61831 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-241335 -n kubernetes-upgrade-241335
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-241335 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-241335" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-241335
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-241335: (1.14297195s)
--- FAIL: TestKubernetesUpgrade (383.72s)
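
For reference, the per-component log gathering visible in the post-mortem above (cri.go listing containers with "sudo crictl ps -a --quiet --name=<component>", then logs.go tailing each match with "sudo crictl logs --tail 400 <id>") can be reproduced by hand on the affected node. The sketch below is a minimal local Go analogue, not minikube's implementation: it assumes crictl is on PATH and may be invoked via sudo, and it runs directly on the node rather than over SSH the way ssh_runner.go does in the log.

	// collect_logs.go: minimal sketch of the crictl-based post-mortem pattern above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "kube-scheduler",
			"kube-controller-manager", "kube-proxy", "coredns", "storage-provisioner",
		}
		for _, name := range components {
			// list all containers (running or exited) whose name matches the component
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Printf("%s: crictl ps failed: %v\n", name, err)
				continue
			}
			ids := strings.Fields(string(out))
			if len(ids) == 0 {
				fmt.Printf("%s: no containers found\n", name)
				continue
			}
			for _, id := range ids {
				// tail the last 400 log lines of each matching container
				logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
				fmt.Printf("==> %s [%s] <==\n%s\n", name, id, logs)
			}
		}
	}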

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (272.98s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-244815 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-244815 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m32.683486545s)

                                                
                                                
-- stdout --
	* [old-k8s-version-244815] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20400
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20400-12456/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20400-12456/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-244815" primary control-plane node in "old-k8s-version-244815" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0211 03:07:43.818157   60206 out.go:345] Setting OutFile to fd 1 ...
	I0211 03:07:43.818323   60206 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 03:07:43.818336   60206 out.go:358] Setting ErrFile to fd 2...
	I0211 03:07:43.818343   60206 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 03:07:43.818665   60206 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20400-12456/.minikube/bin
	I0211 03:07:43.819472   60206 out.go:352] Setting JSON to false
	I0211 03:07:43.820724   60206 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6615,"bootTime":1739236649,"procs":226,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0211 03:07:43.820813   60206 start.go:139] virtualization: kvm guest
	I0211 03:07:43.823055   60206 out.go:177] * [old-k8s-version-244815] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0211 03:07:43.824338   60206 notify.go:220] Checking for updates...
	I0211 03:07:43.824346   60206 out.go:177]   - MINIKUBE_LOCATION=20400
	I0211 03:07:43.825648   60206 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0211 03:07:43.826903   60206 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20400-12456/kubeconfig
	I0211 03:07:43.828048   60206 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20400-12456/.minikube
	I0211 03:07:43.829211   60206 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0211 03:07:43.830344   60206 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0211 03:07:43.832107   60206 config.go:182] Loaded profile config "cert-expiration-411526": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0211 03:07:43.832261   60206 config.go:182] Loaded profile config "kubernetes-upgrade-241335": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0211 03:07:43.832409   60206 config.go:182] Loaded profile config "stopped-upgrade-285044": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0211 03:07:43.832516   60206 driver.go:394] Setting default libvirt URI to qemu:///system
	I0211 03:07:43.869016   60206 out.go:177] * Using the kvm2 driver based on user configuration
	I0211 03:07:43.870286   60206 start.go:297] selected driver: kvm2
	I0211 03:07:43.870304   60206 start.go:901] validating driver "kvm2" against <nil>
	I0211 03:07:43.870319   60206 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0211 03:07:43.871409   60206 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0211 03:07:43.871521   60206 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20400-12456/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0211 03:07:43.889228   60206 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0211 03:07:43.889285   60206 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0211 03:07:43.889611   60206 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0211 03:07:43.889658   60206 cni.go:84] Creating CNI manager for ""
	I0211 03:07:43.889721   60206 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0211 03:07:43.889737   60206 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0211 03:07:43.889803   60206 start.go:340] cluster config:
	{Name:old-k8s-version-244815 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-244815 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRI
Socket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:
0 GPUs: AutoPauseInterval:1m0s}
	I0211 03:07:43.889932   60206 iso.go:125] acquiring lock: {Name:mkf866c6e52b4efa55cc59a9f329105471716f9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0211 03:07:43.891650   60206 out.go:177] * Starting "old-k8s-version-244815" primary control-plane node in "old-k8s-version-244815" cluster
	I0211 03:07:43.892815   60206 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0211 03:07:43.892867   60206 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20400-12456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0211 03:07:43.892882   60206 cache.go:56] Caching tarball of preloaded images
	I0211 03:07:43.892988   60206 preload.go:172] Found /home/jenkins/minikube-integration/20400-12456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0211 03:07:43.893003   60206 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0211 03:07:43.893137   60206 profile.go:143] Saving config to /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/old-k8s-version-244815/config.json ...
	I0211 03:07:43.893167   60206 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/old-k8s-version-244815/config.json: {Name:mk21212be90c06ca8d8a3694b48ce28e8528523b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 03:07:43.893358   60206 start.go:360] acquireMachinesLock for old-k8s-version-244815: {Name:mk0cbf79bfabdb28d0a301765db34c154a72eff0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0211 03:07:43.893432   60206 start.go:364] duration metric: took 41.474µs to acquireMachinesLock for "old-k8s-version-244815"
	I0211 03:07:43.893461   60206 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-244815 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-versi
on-244815 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMir
ror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0211 03:07:43.893550   60206 start.go:125] createHost starting for "" (driver="kvm2")
	I0211 03:07:43.895005   60206 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0211 03:07:43.895194   60206 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 03:07:43.895249   60206 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 03:07:43.910696   60206 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44625
	I0211 03:07:43.911240   60206 main.go:141] libmachine: () Calling .GetVersion
	I0211 03:07:43.911938   60206 main.go:141] libmachine: Using API Version  1
	I0211 03:07:43.911964   60206 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 03:07:43.912398   60206 main.go:141] libmachine: () Calling .GetMachineName
	I0211 03:07:43.912650   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetMachineName
	I0211 03:07:43.912831   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .DriverName
	I0211 03:07:43.912985   60206 start.go:159] libmachine.API.Create for "old-k8s-version-244815" (driver="kvm2")
	I0211 03:07:43.913029   60206 client.go:168] LocalClient.Create starting
	I0211 03:07:43.913069   60206 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca.pem
	I0211 03:07:43.913129   60206 main.go:141] libmachine: Decoding PEM data...
	I0211 03:07:43.913155   60206 main.go:141] libmachine: Parsing certificate...
	I0211 03:07:43.913218   60206 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20400-12456/.minikube/certs/cert.pem
	I0211 03:07:43.913248   60206 main.go:141] libmachine: Decoding PEM data...
	I0211 03:07:43.913264   60206 main.go:141] libmachine: Parsing certificate...
	I0211 03:07:43.913297   60206 main.go:141] libmachine: Running pre-create checks...
	I0211 03:07:43.913313   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .PreCreateCheck
	I0211 03:07:43.913659   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetConfigRaw
	I0211 03:07:43.914077   60206 main.go:141] libmachine: Creating machine...
	I0211 03:07:43.914092   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .Create
	I0211 03:07:43.914241   60206 main.go:141] libmachine: (old-k8s-version-244815) creating KVM machine...
	I0211 03:07:43.914256   60206 main.go:141] libmachine: (old-k8s-version-244815) creating network...
	I0211 03:07:43.915729   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | found existing default KVM network
	I0211 03:07:43.917864   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | I0211 03:07:43.917675   60230 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00020b7f0}
	I0211 03:07:43.917898   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | created network xml: 
	I0211 03:07:43.917912   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | <network>
	I0211 03:07:43.917921   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG |   <name>mk-old-k8s-version-244815</name>
	I0211 03:07:43.917931   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG |   <dns enable='no'/>
	I0211 03:07:43.917937   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG |   
	I0211 03:07:43.917948   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0211 03:07:43.917956   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG |     <dhcp>
	I0211 03:07:43.917974   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0211 03:07:43.917990   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG |     </dhcp>
	I0211 03:07:43.917998   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG |   </ip>
	I0211 03:07:43.918007   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG |   
	I0211 03:07:43.918014   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | </network>
	I0211 03:07:43.918020   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | 
	I0211 03:07:43.923128   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | trying to create private KVM network mk-old-k8s-version-244815 192.168.39.0/24...
	I0211 03:07:43.997948   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | private KVM network mk-old-k8s-version-244815 192.168.39.0/24 created
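	(For context: the network created above is driven by the small XML document printed in the DBG lines. A minimal stand-alone sketch of rendering that XML with Go's standard library only; the type and values are copied from the log and the helper is illustrative, not minikube's actual driver code.)
	
	package main
	
	import (
		"os"
		"text/template"
	)
	
	// netParams holds the values seen in the DBG lines for mk-old-k8s-version-244815.
	type netParams struct {
		Name      string
		Gateway   string
		Netmask   string
		DHCPFirst string
		DHCPLast  string
	}
	
	// networkXML mirrors the <network> document printed above; the real driver
	// presumably hands the rendered XML to libvirt's define-network API.
	const networkXML = `<network>
	  <name>{{.Name}}</name>
	  <dns enable='no'/>
	  <ip address='{{.Gateway}}' netmask='{{.Netmask}}'>
	    <dhcp>
	      <range start='{{.DHCPFirst}}' end='{{.DHCPLast}}'/>
	    </dhcp>
	  </ip>
	</network>`
	
	func main() {
		p := netParams{
			Name:      "mk-old-k8s-version-244815",
			Gateway:   "192.168.39.1",
			Netmask:   "255.255.255.0",
			DHCPFirst: "192.168.39.2",
			DHCPLast:  "192.168.39.253",
		}
		tmpl := template.Must(template.New("net").Parse(networkXML))
		if err := tmpl.Execute(os.Stdout, p); err != nil {
			panic(err)
		}
	}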
	I0211 03:07:43.997998   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | I0211 03:07:43.997915   60230 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20400-12456/.minikube
	I0211 03:07:43.998015   60206 main.go:141] libmachine: (old-k8s-version-244815) setting up store path in /home/jenkins/minikube-integration/20400-12456/.minikube/machines/old-k8s-version-244815 ...
	I0211 03:07:43.998042   60206 main.go:141] libmachine: (old-k8s-version-244815) building disk image from file:///home/jenkins/minikube-integration/20400-12456/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0211 03:07:43.998160   60206 main.go:141] libmachine: (old-k8s-version-244815) Downloading /home/jenkins/minikube-integration/20400-12456/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20400-12456/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0211 03:07:44.264191   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | I0211 03:07:44.264023   60230 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20400-12456/.minikube/machines/old-k8s-version-244815/id_rsa...
	I0211 03:07:44.481267   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | I0211 03:07:44.481098   60230 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20400-12456/.minikube/machines/old-k8s-version-244815/old-k8s-version-244815.rawdisk...
	I0211 03:07:44.481304   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | Writing magic tar header
	I0211 03:07:44.481327   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | Writing SSH key tar header
	I0211 03:07:44.481339   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | I0211 03:07:44.481289   60230 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20400-12456/.minikube/machines/old-k8s-version-244815 ...
	I0211 03:07:44.481464   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20400-12456/.minikube/machines/old-k8s-version-244815
	I0211 03:07:44.481515   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20400-12456/.minikube/machines
	I0211 03:07:44.481535   60206 main.go:141] libmachine: (old-k8s-version-244815) setting executable bit set on /home/jenkins/minikube-integration/20400-12456/.minikube/machines/old-k8s-version-244815 (perms=drwx------)
	I0211 03:07:44.481562   60206 main.go:141] libmachine: (old-k8s-version-244815) setting executable bit set on /home/jenkins/minikube-integration/20400-12456/.minikube/machines (perms=drwxr-xr-x)
	I0211 03:07:44.481585   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20400-12456/.minikube
	I0211 03:07:44.481601   60206 main.go:141] libmachine: (old-k8s-version-244815) setting executable bit set on /home/jenkins/minikube-integration/20400-12456/.minikube (perms=drwxr-xr-x)
	I0211 03:07:44.481621   60206 main.go:141] libmachine: (old-k8s-version-244815) setting executable bit set on /home/jenkins/minikube-integration/20400-12456 (perms=drwxrwxr-x)
	I0211 03:07:44.481636   60206 main.go:141] libmachine: (old-k8s-version-244815) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0211 03:07:44.481653   60206 main.go:141] libmachine: (old-k8s-version-244815) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0211 03:07:44.481682   60206 main.go:141] libmachine: (old-k8s-version-244815) creating domain...
	I0211 03:07:44.481697   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20400-12456
	I0211 03:07:44.481714   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0211 03:07:44.481729   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | checking permissions on dir: /home/jenkins
	I0211 03:07:44.481739   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | checking permissions on dir: /home
	I0211 03:07:44.481753   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | skipping /home - not owner
	I0211 03:07:44.482827   60206 main.go:141] libmachine: (old-k8s-version-244815) define libvirt domain using xml: 
	I0211 03:07:44.482863   60206 main.go:141] libmachine: (old-k8s-version-244815) <domain type='kvm'>
	I0211 03:07:44.482903   60206 main.go:141] libmachine: (old-k8s-version-244815)   <name>old-k8s-version-244815</name>
	I0211 03:07:44.482920   60206 main.go:141] libmachine: (old-k8s-version-244815)   <memory unit='MiB'>2200</memory>
	I0211 03:07:44.482933   60206 main.go:141] libmachine: (old-k8s-version-244815)   <vcpu>2</vcpu>
	I0211 03:07:44.482943   60206 main.go:141] libmachine: (old-k8s-version-244815)   <features>
	I0211 03:07:44.482953   60206 main.go:141] libmachine: (old-k8s-version-244815)     <acpi/>
	I0211 03:07:44.482963   60206 main.go:141] libmachine: (old-k8s-version-244815)     <apic/>
	I0211 03:07:44.482978   60206 main.go:141] libmachine: (old-k8s-version-244815)     <pae/>
	I0211 03:07:44.482988   60206 main.go:141] libmachine: (old-k8s-version-244815)     
	I0211 03:07:44.483004   60206 main.go:141] libmachine: (old-k8s-version-244815)   </features>
	I0211 03:07:44.483015   60206 main.go:141] libmachine: (old-k8s-version-244815)   <cpu mode='host-passthrough'>
	I0211 03:07:44.483029   60206 main.go:141] libmachine: (old-k8s-version-244815)   
	I0211 03:07:44.483039   60206 main.go:141] libmachine: (old-k8s-version-244815)   </cpu>
	I0211 03:07:44.483048   60206 main.go:141] libmachine: (old-k8s-version-244815)   <os>
	I0211 03:07:44.483058   60206 main.go:141] libmachine: (old-k8s-version-244815)     <type>hvm</type>
	I0211 03:07:44.483071   60206 main.go:141] libmachine: (old-k8s-version-244815)     <boot dev='cdrom'/>
	I0211 03:07:44.483082   60206 main.go:141] libmachine: (old-k8s-version-244815)     <boot dev='hd'/>
	I0211 03:07:44.483111   60206 main.go:141] libmachine: (old-k8s-version-244815)     <bootmenu enable='no'/>
	I0211 03:07:44.483136   60206 main.go:141] libmachine: (old-k8s-version-244815)   </os>
	I0211 03:07:44.483151   60206 main.go:141] libmachine: (old-k8s-version-244815)   <devices>
	I0211 03:07:44.483164   60206 main.go:141] libmachine: (old-k8s-version-244815)     <disk type='file' device='cdrom'>
	I0211 03:07:44.483182   60206 main.go:141] libmachine: (old-k8s-version-244815)       <source file='/home/jenkins/minikube-integration/20400-12456/.minikube/machines/old-k8s-version-244815/boot2docker.iso'/>
	I0211 03:07:44.483193   60206 main.go:141] libmachine: (old-k8s-version-244815)       <target dev='hdc' bus='scsi'/>
	I0211 03:07:44.483202   60206 main.go:141] libmachine: (old-k8s-version-244815)       <readonly/>
	I0211 03:07:44.483215   60206 main.go:141] libmachine: (old-k8s-version-244815)     </disk>
	I0211 03:07:44.483229   60206 main.go:141] libmachine: (old-k8s-version-244815)     <disk type='file' device='disk'>
	I0211 03:07:44.483296   60206 main.go:141] libmachine: (old-k8s-version-244815)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0211 03:07:44.483317   60206 main.go:141] libmachine: (old-k8s-version-244815)       <source file='/home/jenkins/minikube-integration/20400-12456/.minikube/machines/old-k8s-version-244815/old-k8s-version-244815.rawdisk'/>
	I0211 03:07:44.483328   60206 main.go:141] libmachine: (old-k8s-version-244815)       <target dev='hda' bus='virtio'/>
	I0211 03:07:44.483341   60206 main.go:141] libmachine: (old-k8s-version-244815)     </disk>
	I0211 03:07:44.483353   60206 main.go:141] libmachine: (old-k8s-version-244815)     <interface type='network'>
	I0211 03:07:44.483375   60206 main.go:141] libmachine: (old-k8s-version-244815)       <source network='mk-old-k8s-version-244815'/>
	I0211 03:07:44.483388   60206 main.go:141] libmachine: (old-k8s-version-244815)       <model type='virtio'/>
	I0211 03:07:44.483402   60206 main.go:141] libmachine: (old-k8s-version-244815)     </interface>
	I0211 03:07:44.483423   60206 main.go:141] libmachine: (old-k8s-version-244815)     <interface type='network'>
	I0211 03:07:44.483442   60206 main.go:141] libmachine: (old-k8s-version-244815)       <source network='default'/>
	I0211 03:07:44.483455   60206 main.go:141] libmachine: (old-k8s-version-244815)       <model type='virtio'/>
	I0211 03:07:44.483466   60206 main.go:141] libmachine: (old-k8s-version-244815)     </interface>
	I0211 03:07:44.483479   60206 main.go:141] libmachine: (old-k8s-version-244815)     <serial type='pty'>
	I0211 03:07:44.483490   60206 main.go:141] libmachine: (old-k8s-version-244815)       <target port='0'/>
	I0211 03:07:44.483502   60206 main.go:141] libmachine: (old-k8s-version-244815)     </serial>
	I0211 03:07:44.483515   60206 main.go:141] libmachine: (old-k8s-version-244815)     <console type='pty'>
	I0211 03:07:44.483528   60206 main.go:141] libmachine: (old-k8s-version-244815)       <target type='serial' port='0'/>
	I0211 03:07:44.483536   60206 main.go:141] libmachine: (old-k8s-version-244815)     </console>
	I0211 03:07:44.483549   60206 main.go:141] libmachine: (old-k8s-version-244815)     <rng model='virtio'>
	I0211 03:07:44.483562   60206 main.go:141] libmachine: (old-k8s-version-244815)       <backend model='random'>/dev/random</backend>
	I0211 03:07:44.483574   60206 main.go:141] libmachine: (old-k8s-version-244815)     </rng>
	I0211 03:07:44.483589   60206 main.go:141] libmachine: (old-k8s-version-244815)     
	I0211 03:07:44.483600   60206 main.go:141] libmachine: (old-k8s-version-244815)     
	I0211 03:07:44.483608   60206 main.go:141] libmachine: (old-k8s-version-244815)   </devices>
	I0211 03:07:44.483627   60206 main.go:141] libmachine: (old-k8s-version-244815) </domain>
	I0211 03:07:44.483637   60206 main.go:141] libmachine: (old-k8s-version-244815) 
	I0211 03:07:44.488568   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined MAC address 52:54:00:70:82:cd in network default
	I0211 03:07:44.489201   60206 main.go:141] libmachine: (old-k8s-version-244815) starting domain...
	I0211 03:07:44.489221   60206 main.go:141] libmachine: (old-k8s-version-244815) ensuring networks are active...
	I0211 03:07:44.489234   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:07:44.490077   60206 main.go:141] libmachine: (old-k8s-version-244815) Ensuring network default is active
	I0211 03:07:44.490439   60206 main.go:141] libmachine: (old-k8s-version-244815) Ensuring network mk-old-k8s-version-244815 is active
	I0211 03:07:44.491049   60206 main.go:141] libmachine: (old-k8s-version-244815) getting domain XML...
	I0211 03:07:44.491874   60206 main.go:141] libmachine: (old-k8s-version-244815) creating domain...
	I0211 03:07:45.767354   60206 main.go:141] libmachine: (old-k8s-version-244815) waiting for IP...
	I0211 03:07:45.768323   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:07:45.768841   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | unable to find current IP address of domain old-k8s-version-244815 in network mk-old-k8s-version-244815
	I0211 03:07:45.768907   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | I0211 03:07:45.768836   60230 retry.go:31] will retry after 258.626527ms: waiting for domain to come up
	I0211 03:07:46.029311   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:07:46.029864   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | unable to find current IP address of domain old-k8s-version-244815 in network mk-old-k8s-version-244815
	I0211 03:07:46.029905   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | I0211 03:07:46.029852   60230 retry.go:31] will retry after 258.036304ms: waiting for domain to come up
	I0211 03:07:46.289058   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:07:46.289529   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | unable to find current IP address of domain old-k8s-version-244815 in network mk-old-k8s-version-244815
	I0211 03:07:46.289634   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | I0211 03:07:46.289507   60230 retry.go:31] will retry after 486.820956ms: waiting for domain to come up
	I0211 03:07:46.778125   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:07:46.778622   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | unable to find current IP address of domain old-k8s-version-244815 in network mk-old-k8s-version-244815
	I0211 03:07:46.778655   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | I0211 03:07:46.778599   60230 retry.go:31] will retry after 600.448569ms: waiting for domain to come up
	I0211 03:07:47.380970   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:07:47.381537   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | unable to find current IP address of domain old-k8s-version-244815 in network mk-old-k8s-version-244815
	I0211 03:07:47.381589   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | I0211 03:07:47.381517   60230 retry.go:31] will retry after 536.710117ms: waiting for domain to come up
	I0211 03:07:47.920573   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:07:47.921090   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | unable to find current IP address of domain old-k8s-version-244815 in network mk-old-k8s-version-244815
	I0211 03:07:47.921150   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | I0211 03:07:47.921071   60230 retry.go:31] will retry after 759.536718ms: waiting for domain to come up
	I0211 03:07:48.681892   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:07:48.682434   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | unable to find current IP address of domain old-k8s-version-244815 in network mk-old-k8s-version-244815
	I0211 03:07:48.682465   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | I0211 03:07:48.682403   60230 retry.go:31] will retry after 1.044619581s: waiting for domain to come up
	I0211 03:07:49.728328   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:07:49.728921   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | unable to find current IP address of domain old-k8s-version-244815 in network mk-old-k8s-version-244815
	I0211 03:07:49.728947   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | I0211 03:07:49.728885   60230 retry.go:31] will retry after 1.017398761s: waiting for domain to come up
	I0211 03:07:50.748004   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:07:50.748485   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | unable to find current IP address of domain old-k8s-version-244815 in network mk-old-k8s-version-244815
	I0211 03:07:50.748527   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | I0211 03:07:50.748463   60230 retry.go:31] will retry after 1.625616563s: waiting for domain to come up
	I0211 03:07:52.375736   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:07:52.376264   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | unable to find current IP address of domain old-k8s-version-244815 in network mk-old-k8s-version-244815
	I0211 03:07:52.376292   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | I0211 03:07:52.376238   60230 retry.go:31] will retry after 1.56383072s: waiting for domain to come up
	I0211 03:07:53.941705   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:07:53.942301   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | unable to find current IP address of domain old-k8s-version-244815 in network mk-old-k8s-version-244815
	I0211 03:07:53.942374   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | I0211 03:07:53.942289   60230 retry.go:31] will retry after 2.260396512s: waiting for domain to come up
	I0211 03:07:56.204068   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:07:56.204418   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | unable to find current IP address of domain old-k8s-version-244815 in network mk-old-k8s-version-244815
	I0211 03:07:56.204450   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | I0211 03:07:56.204371   60230 retry.go:31] will retry after 2.622298529s: waiting for domain to come up
	I0211 03:07:58.935734   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:07:58.936151   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | unable to find current IP address of domain old-k8s-version-244815 in network mk-old-k8s-version-244815
	I0211 03:07:58.936222   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | I0211 03:07:58.936135   60230 retry.go:31] will retry after 3.938434562s: waiting for domain to come up
	I0211 03:08:02.875868   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:08:02.876332   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | unable to find current IP address of domain old-k8s-version-244815 in network mk-old-k8s-version-244815
	I0211 03:08:02.876360   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | I0211 03:08:02.876311   60230 retry.go:31] will retry after 3.777960543s: waiting for domain to come up
	I0211 03:08:06.657489   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:08:06.658040   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has current primary IP address 192.168.39.206 and MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:08:06.658066   60206 main.go:141] libmachine: (old-k8s-version-244815) found domain IP: 192.168.39.206
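	(The repeated "will retry after ..." lines above are a polling loop that sleeps for a randomized, growing delay until the domain reports an IP. A rough stand-in for that pattern follows; the helper names and stub lookup are hypothetical, not the retry.go implementation.)
	
	package main
	
	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)
	
	// lookupIP stands in for querying the domain's DHCP lease; here it fails a few times.
	func lookupIP(attempt int) (string, error) {
		if attempt < 5 {
			return "", errors.New("unable to find current IP address")
		}
		return "192.168.39.206", nil
	}
	
	// waitForIP polls with a jittered, growing delay, like the retry lines in the log.
	func waitForIP(timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		for attempt := 0; time.Now().Before(deadline); attempt++ {
			ip, err := lookupIP(attempt)
			if err == nil {
				return ip, nil
			}
			delay := time.Duration(250+rand.Intn(250)*(attempt+1)) * time.Millisecond
			fmt.Printf("will retry after %v: waiting for domain to come up\n", delay)
			time.Sleep(delay)
		}
		return "", errors.New("timed out waiting for domain IP")
	}
	
	func main() {
		ip, err := waitForIP(2 * time.Minute)
		if err != nil {
			panic(err)
		}
		fmt.Println("found domain IP:", ip)
	}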
	I0211 03:08:06.658086   60206 main.go:141] libmachine: (old-k8s-version-244815) reserving static IP address...
	I0211 03:08:06.658453   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-244815", mac: "52:54:00:5e:6f:f7", ip: "192.168.39.206"} in network mk-old-k8s-version-244815
	I0211 03:08:06.731415   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | Getting to WaitForSSH function...
	I0211 03:08:06.731450   60206 main.go:141] libmachine: (old-k8s-version-244815) reserved static IP address 192.168.39.206 for domain old-k8s-version-244815
	I0211 03:08:06.731536   60206 main.go:141] libmachine: (old-k8s-version-244815) waiting for SSH...
	I0211 03:08:06.733773   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:08:06.734100   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:6f:f7", ip: ""} in network mk-old-k8s-version-244815: {Iface:virbr1 ExpiryTime:2025-02-11 04:07:58 +0000 UTC Type:0 Mac:52:54:00:5e:6f:f7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5e:6f:f7}
	I0211 03:08:06.734148   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined IP address 192.168.39.206 and MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:08:06.734291   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | Using SSH client type: external
	I0211 03:08:06.734314   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | Using SSH private key: /home/jenkins/minikube-integration/20400-12456/.minikube/machines/old-k8s-version-244815/id_rsa (-rw-------)
	I0211 03:08:06.734358   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.206 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20400-12456/.minikube/machines/old-k8s-version-244815/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0211 03:08:06.734371   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | About to run SSH command:
	I0211 03:08:06.734399   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | exit 0
	I0211 03:08:06.854434   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | SSH cmd err, output: <nil>: 
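	(The external SSH probe above just runs `exit 0` against the new VM with the client options printed in the DBG line. A minimal equivalent using Go's os/exec; the address and key path are taken from the log, the function name is made up.)
	
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// sshExitZero runs `exit 0` on the guest the way the external SSH client in the
	// log does: key-only auth, no known_hosts, short connect timeout.
	func sshExitZero(addr, keyPath string) error {
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "PasswordAuthentication=no",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"-p", "22",
			"docker@" + addr,
			"exit 0",
		}
		out, err := exec.Command("ssh", args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("ssh probe failed: %v: %s", err, out)
		}
		return nil
	}
	
	func main() {
		key := "/home/jenkins/minikube-integration/20400-12456/.minikube/machines/old-k8s-version-244815/id_rsa"
		if err := sshExitZero("192.168.39.206", key); err != nil {
			panic(err)
		}
		fmt.Println("SSH cmd err, output: <nil>")
	}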
	I0211 03:08:06.854706   60206 main.go:141] libmachine: (old-k8s-version-244815) KVM machine creation complete
	I0211 03:08:06.855022   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetConfigRaw
	I0211 03:08:06.855519   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .DriverName
	I0211 03:08:06.855693   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .DriverName
	I0211 03:08:06.855908   60206 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0211 03:08:06.855920   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetState
	I0211 03:08:06.857391   60206 main.go:141] libmachine: Detecting operating system of created instance...
	I0211 03:08:06.857407   60206 main.go:141] libmachine: Waiting for SSH to be available...
	I0211 03:08:06.857414   60206 main.go:141] libmachine: Getting to WaitForSSH function...
	I0211 03:08:06.857422   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHHostname
	I0211 03:08:06.859819   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:08:06.860203   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:6f:f7", ip: ""} in network mk-old-k8s-version-244815: {Iface:virbr1 ExpiryTime:2025-02-11 04:07:58 +0000 UTC Type:0 Mac:52:54:00:5e:6f:f7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:old-k8s-version-244815 Clientid:01:52:54:00:5e:6f:f7}
	I0211 03:08:06.860232   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined IP address 192.168.39.206 and MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:08:06.860412   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHPort
	I0211 03:08:06.860593   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHKeyPath
	I0211 03:08:06.860715   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHKeyPath
	I0211 03:08:06.860861   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHUsername
	I0211 03:08:06.861006   60206 main.go:141] libmachine: Using SSH client type: native
	I0211 03:08:06.861183   60206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0211 03:08:06.861201   60206 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0211 03:08:06.957787   60206 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0211 03:08:06.957813   60206 main.go:141] libmachine: Detecting the provisioner...
	I0211 03:08:06.957832   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHHostname
	I0211 03:08:06.960734   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:08:06.961086   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:6f:f7", ip: ""} in network mk-old-k8s-version-244815: {Iface:virbr1 ExpiryTime:2025-02-11 04:07:58 +0000 UTC Type:0 Mac:52:54:00:5e:6f:f7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:old-k8s-version-244815 Clientid:01:52:54:00:5e:6f:f7}
	I0211 03:08:06.961110   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined IP address 192.168.39.206 and MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:08:06.961251   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHPort
	I0211 03:08:06.961416   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHKeyPath
	I0211 03:08:06.961564   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHKeyPath
	I0211 03:08:06.961732   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHUsername
	I0211 03:08:06.961899   60206 main.go:141] libmachine: Using SSH client type: native
	I0211 03:08:06.962057   60206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0211 03:08:06.962067   60206 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0211 03:08:07.058915   60206 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0211 03:08:07.058986   60206 main.go:141] libmachine: found compatible host: buildroot
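	(The provisioner decides it is talking to a Buildroot guest by reading the key=value pairs from `cat /etc/os-release` above. A tiny sketch of that parsing step, standard library only and purely illustrative.)
	
	package main
	
	import (
		"bufio"
		"fmt"
		"strings"
	)
	
	// parseOSRelease reads the key=value pairs from `cat /etc/os-release` output.
	func parseOSRelease(out string) map[string]string {
		kv := map[string]string{}
		sc := bufio.NewScanner(strings.NewReader(out))
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if line == "" || !strings.Contains(line, "=") {
				continue
			}
			parts := strings.SplitN(line, "=", 2)
			kv[parts[0]] = strings.Trim(parts[1], `"`)
		}
		return kv
	}
	
	func main() {
		out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
		info := parseOSRelease(out)
		fmt.Println("found compatible host:", info["ID"])
	}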
	I0211 03:08:07.058995   60206 main.go:141] libmachine: Provisioning with buildroot...
	I0211 03:08:07.059005   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetMachineName
	I0211 03:08:07.059219   60206 buildroot.go:166] provisioning hostname "old-k8s-version-244815"
	I0211 03:08:07.059242   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetMachineName
	I0211 03:08:07.059402   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHHostname
	I0211 03:08:07.061932   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:08:07.062254   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:6f:f7", ip: ""} in network mk-old-k8s-version-244815: {Iface:virbr1 ExpiryTime:2025-02-11 04:07:58 +0000 UTC Type:0 Mac:52:54:00:5e:6f:f7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:old-k8s-version-244815 Clientid:01:52:54:00:5e:6f:f7}
	I0211 03:08:07.062277   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined IP address 192.168.39.206 and MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:08:07.062388   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHPort
	I0211 03:08:07.062558   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHKeyPath
	I0211 03:08:07.062704   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHKeyPath
	I0211 03:08:07.062842   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHUsername
	I0211 03:08:07.063011   60206 main.go:141] libmachine: Using SSH client type: native
	I0211 03:08:07.063169   60206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0211 03:08:07.063181   60206 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-244815 && echo "old-k8s-version-244815" | sudo tee /etc/hostname
	I0211 03:08:07.171292   60206 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-244815
	
	I0211 03:08:07.171324   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHHostname
	I0211 03:08:07.174763   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:08:07.175152   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:6f:f7", ip: ""} in network mk-old-k8s-version-244815: {Iface:virbr1 ExpiryTime:2025-02-11 04:07:58 +0000 UTC Type:0 Mac:52:54:00:5e:6f:f7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:old-k8s-version-244815 Clientid:01:52:54:00:5e:6f:f7}
	I0211 03:08:07.175183   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined IP address 192.168.39.206 and MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:08:07.175311   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHPort
	I0211 03:08:07.175499   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHKeyPath
	I0211 03:08:07.175663   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHKeyPath
	I0211 03:08:07.175800   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHUsername
	I0211 03:08:07.175968   60206 main.go:141] libmachine: Using SSH client type: native
	I0211 03:08:07.176152   60206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0211 03:08:07.176169   60206 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-244815' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-244815/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-244815' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0211 03:08:07.278314   60206 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0211 03:08:07.278346   60206 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20400-12456/.minikube CaCertPath:/home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20400-12456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20400-12456/.minikube}
	I0211 03:08:07.278381   60206 buildroot.go:174] setting up certificates
	I0211 03:08:07.278397   60206 provision.go:84] configureAuth start
	I0211 03:08:07.278407   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetMachineName
	I0211 03:08:07.278723   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetIP
	I0211 03:08:07.281426   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:08:07.281823   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:6f:f7", ip: ""} in network mk-old-k8s-version-244815: {Iface:virbr1 ExpiryTime:2025-02-11 04:07:58 +0000 UTC Type:0 Mac:52:54:00:5e:6f:f7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:old-k8s-version-244815 Clientid:01:52:54:00:5e:6f:f7}
	I0211 03:08:07.281851   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined IP address 192.168.39.206 and MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:08:07.282022   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHHostname
	I0211 03:08:07.284211   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:08:07.284533   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:6f:f7", ip: ""} in network mk-old-k8s-version-244815: {Iface:virbr1 ExpiryTime:2025-02-11 04:07:58 +0000 UTC Type:0 Mac:52:54:00:5e:6f:f7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:old-k8s-version-244815 Clientid:01:52:54:00:5e:6f:f7}
	I0211 03:08:07.284569   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined IP address 192.168.39.206 and MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:08:07.284671   60206 provision.go:143] copyHostCerts
	I0211 03:08:07.284722   60206 exec_runner.go:144] found /home/jenkins/minikube-integration/20400-12456/.minikube/key.pem, removing ...
	I0211 03:08:07.284738   60206 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20400-12456/.minikube/key.pem
	I0211 03:08:07.284788   60206 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20400-12456/.minikube/key.pem (1679 bytes)
	I0211 03:08:07.284864   60206 exec_runner.go:144] found /home/jenkins/minikube-integration/20400-12456/.minikube/ca.pem, removing ...
	I0211 03:08:07.284871   60206 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20400-12456/.minikube/ca.pem
	I0211 03:08:07.284890   60206 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20400-12456/.minikube/ca.pem (1078 bytes)
	I0211 03:08:07.284938   60206 exec_runner.go:144] found /home/jenkins/minikube-integration/20400-12456/.minikube/cert.pem, removing ...
	I0211 03:08:07.284945   60206 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20400-12456/.minikube/cert.pem
	I0211 03:08:07.284960   60206 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20400-12456/.minikube/cert.pem (1123 bytes)
	I0211 03:08:07.285001   60206 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20400-12456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-244815 san=[127.0.0.1 192.168.39.206 localhost minikube old-k8s-version-244815]
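	(The "generating server cert" step above issues a server certificate signed by the profile's CA, carrying the SANs listed in the log. A compressed, self-contained sketch of that idea with crypto/x509; key sizes, serial numbers and error handling are simplified and this is not minikube's actual provisioning code.)
	
	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	func main() {
		// Self-sign a CA, then issue a server certificate with the SANs from the log line.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{Organization: []string{"jenkins.old-k8s-version-244815"}},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)
	
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "old-k8s-version-244815"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			DNSNames:     []string{"localhost", "minikube", "old-k8s-version-244815"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.206")},
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}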
	I0211 03:08:07.375928   60206 provision.go:177] copyRemoteCerts
	I0211 03:08:07.375988   60206 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0211 03:08:07.376010   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHHostname
	I0211 03:08:07.378796   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:08:07.379160   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:6f:f7", ip: ""} in network mk-old-k8s-version-244815: {Iface:virbr1 ExpiryTime:2025-02-11 04:07:58 +0000 UTC Type:0 Mac:52:54:00:5e:6f:f7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:old-k8s-version-244815 Clientid:01:52:54:00:5e:6f:f7}
	I0211 03:08:07.379199   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined IP address 192.168.39.206 and MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:08:07.379359   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHPort
	I0211 03:08:07.379515   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHKeyPath
	I0211 03:08:07.379633   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHUsername
	I0211 03:08:07.379738   60206 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/old-k8s-version-244815/id_rsa Username:docker}
	I0211 03:08:07.460530   60206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0211 03:08:07.483255   60206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0211 03:08:07.504694   60206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0211 03:08:07.526289   60206 provision.go:87] duration metric: took 247.879299ms to configureAuth
	I0211 03:08:07.526318   60206 buildroot.go:189] setting minikube options for container-runtime
	I0211 03:08:07.526486   60206 config.go:182] Loaded profile config "old-k8s-version-244815": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0211 03:08:07.526561   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHHostname
	I0211 03:08:07.528993   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:08:07.529283   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:6f:f7", ip: ""} in network mk-old-k8s-version-244815: {Iface:virbr1 ExpiryTime:2025-02-11 04:07:58 +0000 UTC Type:0 Mac:52:54:00:5e:6f:f7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:old-k8s-version-244815 Clientid:01:52:54:00:5e:6f:f7}
	I0211 03:08:07.529324   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined IP address 192.168.39.206 and MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:08:07.529429   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHPort
	I0211 03:08:07.529585   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHKeyPath
	I0211 03:08:07.529709   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHKeyPath
	I0211 03:08:07.529891   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHUsername
	I0211 03:08:07.530057   60206 main.go:141] libmachine: Using SSH client type: native
	I0211 03:08:07.530214   60206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0211 03:08:07.530228   60206 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0211 03:08:07.744329   60206 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0211 03:08:07.744354   60206 main.go:141] libmachine: Checking connection to Docker...
	I0211 03:08:07.744361   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetURL
	I0211 03:08:07.745646   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | using libvirt version 6000000
	I0211 03:08:07.747707   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:08:07.748034   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:6f:f7", ip: ""} in network mk-old-k8s-version-244815: {Iface:virbr1 ExpiryTime:2025-02-11 04:07:58 +0000 UTC Type:0 Mac:52:54:00:5e:6f:f7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:old-k8s-version-244815 Clientid:01:52:54:00:5e:6f:f7}
	I0211 03:08:07.748063   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined IP address 192.168.39.206 and MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:08:07.748202   60206 main.go:141] libmachine: Docker is up and running!
	I0211 03:08:07.748214   60206 main.go:141] libmachine: Reticulating splines...
	I0211 03:08:07.748223   60206 client.go:171] duration metric: took 23.83518079s to LocalClient.Create
	I0211 03:08:07.748248   60206 start.go:167] duration metric: took 23.835264325s to libmachine.API.Create "old-k8s-version-244815"
	I0211 03:08:07.748260   60206 start.go:293] postStartSetup for "old-k8s-version-244815" (driver="kvm2")
	I0211 03:08:07.748273   60206 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0211 03:08:07.748296   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .DriverName
	I0211 03:08:07.748504   60206 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0211 03:08:07.748534   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHHostname
	I0211 03:08:07.750477   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:08:07.750809   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:6f:f7", ip: ""} in network mk-old-k8s-version-244815: {Iface:virbr1 ExpiryTime:2025-02-11 04:07:58 +0000 UTC Type:0 Mac:52:54:00:5e:6f:f7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:old-k8s-version-244815 Clientid:01:52:54:00:5e:6f:f7}
	I0211 03:08:07.750837   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined IP address 192.168.39.206 and MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:08:07.750933   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHPort
	I0211 03:08:07.751100   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHKeyPath
	I0211 03:08:07.751243   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHUsername
	I0211 03:08:07.751380   60206 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/old-k8s-version-244815/id_rsa Username:docker}
	I0211 03:08:07.828443   60206 ssh_runner.go:195] Run: cat /etc/os-release
	I0211 03:08:07.832183   60206 info.go:137] Remote host: Buildroot 2023.02.9
	I0211 03:08:07.832203   60206 filesync.go:126] Scanning /home/jenkins/minikube-integration/20400-12456/.minikube/addons for local assets ...
	I0211 03:08:07.832250   60206 filesync.go:126] Scanning /home/jenkins/minikube-integration/20400-12456/.minikube/files for local assets ...
	I0211 03:08:07.832327   60206 filesync.go:149] local asset: /home/jenkins/minikube-integration/20400-12456/.minikube/files/etc/ssl/certs/196452.pem -> 196452.pem in /etc/ssl/certs
	I0211 03:08:07.832423   60206 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0211 03:08:07.841069   60206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/files/etc/ssl/certs/196452.pem --> /etc/ssl/certs/196452.pem (1708 bytes)
	I0211 03:08:07.862167   60206 start.go:296] duration metric: took 113.896546ms for postStartSetup
	I0211 03:08:07.862210   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetConfigRaw
	I0211 03:08:07.862825   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetIP
	I0211 03:08:07.865400   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:08:07.865701   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:6f:f7", ip: ""} in network mk-old-k8s-version-244815: {Iface:virbr1 ExpiryTime:2025-02-11 04:07:58 +0000 UTC Type:0 Mac:52:54:00:5e:6f:f7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:old-k8s-version-244815 Clientid:01:52:54:00:5e:6f:f7}
	I0211 03:08:07.865739   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined IP address 192.168.39.206 and MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:08:07.865919   60206 profile.go:143] Saving config to /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/old-k8s-version-244815/config.json ...
	I0211 03:08:07.866114   60206 start.go:128] duration metric: took 23.972552671s to createHost
	I0211 03:08:07.866141   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHHostname
	I0211 03:08:07.868106   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:08:07.868428   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:6f:f7", ip: ""} in network mk-old-k8s-version-244815: {Iface:virbr1 ExpiryTime:2025-02-11 04:07:58 +0000 UTC Type:0 Mac:52:54:00:5e:6f:f7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:old-k8s-version-244815 Clientid:01:52:54:00:5e:6f:f7}
	I0211 03:08:07.868464   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined IP address 192.168.39.206 and MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:08:07.868606   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHPort
	I0211 03:08:07.868768   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHKeyPath
	I0211 03:08:07.868890   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHKeyPath
	I0211 03:08:07.869002   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHUsername
	I0211 03:08:07.869146   60206 main.go:141] libmachine: Using SSH client type: native
	I0211 03:08:07.869317   60206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0211 03:08:07.869340   60206 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0211 03:08:07.963560   60206 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739243287.939211016
	
	I0211 03:08:07.963585   60206 fix.go:216] guest clock: 1739243287.939211016
	I0211 03:08:07.963594   60206 fix.go:229] Guest: 2025-02-11 03:08:07.939211016 +0000 UTC Remote: 2025-02-11 03:08:07.866128612 +0000 UTC m=+24.099729581 (delta=73.082404ms)
	I0211 03:08:07.963638   60206 fix.go:200] guest clock delta is within tolerance: 73.082404ms
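	(The clock check above runs `date +%s.%N` on the guest, parses the result, and compares it against the host clock. A small sketch of that comparison using the values from the log; float parsing loses sub-microsecond precision, and the tolerance constant here is hypothetical since the real threshold is not shown in the log.)
	
	package main
	
	import (
		"fmt"
		"strconv"
		"time"
	)
	
	// parseGuestClock turns the `date +%s.%N` output from the guest into a time.Time.
	func parseGuestClock(s string) (time.Time, error) {
		f, err := strconv.ParseFloat(s, 64)
		if err != nil {
			return time.Time{}, err
		}
		sec := int64(f)
		nsec := int64((f - float64(sec)) * 1e9)
		return time.Unix(sec, nsec).UTC(), nil
	}
	
	func main() {
		guest, err := parseGuestClock("1739243287.939211016")
		if err != nil {
			panic(err)
		}
		local := time.Date(2025, 2, 11, 3, 8, 7, 866128612, time.UTC) // host time from the log
		delta := guest.Sub(local)
		const tolerance = time.Second // hypothetical tolerance for illustration
		fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, delta < tolerance && delta > -tolerance)
	}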
	I0211 03:08:07.963646   60206 start.go:83] releasing machines lock for "old-k8s-version-244815", held for 24.070199002s
	I0211 03:08:07.963676   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .DriverName
	I0211 03:08:07.963907   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetIP
	I0211 03:08:07.966739   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:08:07.967201   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:6f:f7", ip: ""} in network mk-old-k8s-version-244815: {Iface:virbr1 ExpiryTime:2025-02-11 04:07:58 +0000 UTC Type:0 Mac:52:54:00:5e:6f:f7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:old-k8s-version-244815 Clientid:01:52:54:00:5e:6f:f7}
	I0211 03:08:07.967230   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined IP address 192.168.39.206 and MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:08:07.967453   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .DriverName
	I0211 03:08:07.967936   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .DriverName
	I0211 03:08:07.968134   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .DriverName
	I0211 03:08:07.968200   60206 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0211 03:08:07.968255   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHHostname
	I0211 03:08:07.968430   60206 ssh_runner.go:195] Run: cat /version.json
	I0211 03:08:07.968459   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHHostname
	I0211 03:08:07.971033   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:08:07.971368   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:08:07.971493   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:6f:f7", ip: ""} in network mk-old-k8s-version-244815: {Iface:virbr1 ExpiryTime:2025-02-11 04:07:58 +0000 UTC Type:0 Mac:52:54:00:5e:6f:f7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:old-k8s-version-244815 Clientid:01:52:54:00:5e:6f:f7}
	I0211 03:08:07.971526   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined IP address 192.168.39.206 and MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:08:07.971654   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHPort
	I0211 03:08:07.971804   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:6f:f7", ip: ""} in network mk-old-k8s-version-244815: {Iface:virbr1 ExpiryTime:2025-02-11 04:07:58 +0000 UTC Type:0 Mac:52:54:00:5e:6f:f7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:old-k8s-version-244815 Clientid:01:52:54:00:5e:6f:f7}
	I0211 03:08:07.971824   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHKeyPath
	I0211 03:08:07.971825   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined IP address 192.168.39.206 and MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:08:07.971954   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHPort
	I0211 03:08:07.972019   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHUsername
	I0211 03:08:07.972098   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHKeyPath
	I0211 03:08:07.972190   60206 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/old-k8s-version-244815/id_rsa Username:docker}
	I0211 03:08:07.972242   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHUsername
	I0211 03:08:07.972361   60206 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/old-k8s-version-244815/id_rsa Username:docker}
	I0211 03:08:08.071098   60206 ssh_runner.go:195] Run: systemctl --version
	I0211 03:08:08.076912   60206 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0211 03:08:08.227859   60206 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0211 03:08:08.233222   60206 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0211 03:08:08.233287   60206 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0211 03:08:08.249178   60206 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0211 03:08:08.249201   60206 start.go:495] detecting cgroup driver to use...
	I0211 03:08:08.249267   60206 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0211 03:08:08.266200   60206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0211 03:08:08.279556   60206 docker.go:217] disabling cri-docker service (if available) ...
	I0211 03:08:08.279610   60206 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0211 03:08:08.292713   60206 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0211 03:08:08.305577   60206 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0211 03:08:08.422157   60206 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0211 03:08:08.575466   60206 docker.go:233] disabling docker service ...
	I0211 03:08:08.575543   60206 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0211 03:08:08.589017   60206 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0211 03:08:08.601101   60206 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0211 03:08:08.738472   60206 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0211 03:08:08.858684   60206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
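	Disabling the competing runtimes above amounts to stopping, disabling, and masking the cri-docker and docker units before CRI-O is configured (a condensed sketch of the systemctl calls logged above, not the literal sequence minikube issues):
	
	  sudo systemctl stop -f cri-docker.socket cri-docker.service
	  sudo systemctl disable cri-docker.socket
	  sudo systemctl mask cri-docker.service
	  sudo systemctl stop -f docker.socket docker.service
	  sudo systemctl disable docker.socket
	  sudo systemctl mask docker.service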
	I0211 03:08:08.872622   60206 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0211 03:08:08.890560   60206 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0211 03:08:08.890632   60206 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 03:08:08.901185   60206 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0211 03:08:08.901269   60206 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 03:08:08.911599   60206 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 03:08:08.922040   60206 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 03:08:08.932226   60206 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0211 03:08:08.942535   60206 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0211 03:08:08.951649   60206 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0211 03:08:08.951694   60206 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0211 03:08:08.971507   60206 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0211 03:08:08.981842   60206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0211 03:08:09.106851   60206 ssh_runner.go:195] Run: sudo systemctl restart crio
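	The runtime setup logged above condenses to the following guest-side commands (a sketch assembled from the Run: lines; the socket path, pause image, and cgroupfs driver are exactly as logged for this profile):
	
	  # point crictl at the CRI-O socket
	  sudo mkdir -p /etc && printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml
	  # pin the pause image and cgroup driver CRI-O should use
	  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
	  sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	  sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	  sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	  # reload units and restart the runtime
	  sudo systemctl daemon-reload
	  sudo systemctl restart crio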
	I0211 03:08:09.207145   60206 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0211 03:08:09.207210   60206 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0211 03:08:09.213377   60206 start.go:563] Will wait 60s for crictl version
	I0211 03:08:09.213453   60206 ssh_runner.go:195] Run: which crictl
	I0211 03:08:09.217182   60206 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0211 03:08:09.262516   60206 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0211 03:08:09.262602   60206 ssh_runner.go:195] Run: crio --version
	I0211 03:08:09.288775   60206 ssh_runner.go:195] Run: crio --version
	I0211 03:08:09.317909   60206 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0211 03:08:09.319226   60206 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetIP
	I0211 03:08:09.322482   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:08:09.322910   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:6f:f7", ip: ""} in network mk-old-k8s-version-244815: {Iface:virbr1 ExpiryTime:2025-02-11 04:07:58 +0000 UTC Type:0 Mac:52:54:00:5e:6f:f7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:old-k8s-version-244815 Clientid:01:52:54:00:5e:6f:f7}
	I0211 03:08:09.322942   60206 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined IP address 192.168.39.206 and MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:08:09.323207   60206 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0211 03:08:09.327176   60206 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0211 03:08:09.338891   60206 kubeadm.go:883] updating cluster {Name:old-k8s-version-244815 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-244815 Namespace
:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0211 03:08:09.339020   60206 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0211 03:08:09.339079   60206 ssh_runner.go:195] Run: sudo crictl images --output json
	I0211 03:08:09.373905   60206 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0211 03:08:09.373976   60206 ssh_runner.go:195] Run: which lz4
	I0211 03:08:09.378013   60206 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0211 03:08:09.382299   60206 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0211 03:08:09.382340   60206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0211 03:08:10.892123   60206 crio.go:462] duration metric: took 1.514141663s to copy over tarball
	I0211 03:08:10.892202   60206 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0211 03:08:13.497429   60206 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.60520369s)
	I0211 03:08:13.497455   60206 crio.go:469] duration metric: took 2.605302555s to extract the tarball
	I0211 03:08:13.497462   60206 ssh_runner.go:146] rm: /preloaded.tar.lz4
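	The preload handling above reduces to: check for a tarball on the guest, copy it over when missing, unpack it into /var, then delete it (a sketch of the logged commands; the source tarball lives under the host's .minikube/cache/preloaded-tarball directory as shown above):
	
	  stat -c "%s %y" /preloaded.tar.lz4        # absent here, so the tarball is scp'd over from the host cache
	  sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	  rm /preloaded.tar.lz4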
	I0211 03:08:13.539306   60206 ssh_runner.go:195] Run: sudo crictl images --output json
	I0211 03:08:13.581307   60206 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0211 03:08:13.581334   60206 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0211 03:08:13.581401   60206 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0211 03:08:13.581452   60206 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0211 03:08:13.581462   60206 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0211 03:08:13.581484   60206 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0211 03:08:13.581501   60206 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0211 03:08:13.581403   60206 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0211 03:08:13.581571   60206 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0211 03:08:13.581466   60206 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0211 03:08:13.582892   60206 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0211 03:08:13.582916   60206 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0211 03:08:13.582922   60206 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0211 03:08:13.582924   60206 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0211 03:08:13.582994   60206 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0211 03:08:13.582994   60206 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0211 03:08:13.583168   60206 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0211 03:08:13.583417   60206 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0211 03:08:13.729242   60206 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0211 03:08:13.732655   60206 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0211 03:08:13.733392   60206 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0211 03:08:13.742696   60206 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0211 03:08:13.757672   60206 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0211 03:08:13.764440   60206 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0211 03:08:13.766972   60206 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0211 03:08:13.853718   60206 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0211 03:08:13.853769   60206 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0211 03:08:13.853826   60206 ssh_runner.go:195] Run: which crictl
	I0211 03:08:13.877765   60206 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0211 03:08:13.877803   60206 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0211 03:08:13.877821   60206 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0211 03:08:13.877834   60206 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0211 03:08:13.877881   60206 ssh_runner.go:195] Run: which crictl
	I0211 03:08:13.877881   60206 ssh_runner.go:195] Run: which crictl
	I0211 03:08:13.903152   60206 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0211 03:08:13.903173   60206 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0211 03:08:13.903206   60206 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0211 03:08:13.903206   60206 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0211 03:08:13.903254   60206 ssh_runner.go:195] Run: which crictl
	I0211 03:08:13.903254   60206 ssh_runner.go:195] Run: which crictl
	I0211 03:08:13.912436   60206 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0211 03:08:13.912450   60206 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0211 03:08:13.912477   60206 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0211 03:08:13.912495   60206 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0211 03:08:13.912527   60206 ssh_runner.go:195] Run: which crictl
	I0211 03:08:13.912526   60206 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0211 03:08:13.912535   60206 ssh_runner.go:195] Run: which crictl
	I0211 03:08:13.912552   60206 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0211 03:08:13.912600   60206 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0211 03:08:13.912605   60206 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0211 03:08:13.912627   60206 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0211 03:08:13.928149   60206 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0211 03:08:13.928159   60206 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0211 03:08:14.011736   60206 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0211 03:08:14.057512   60206 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0211 03:08:14.057587   60206 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0211 03:08:14.084993   60206 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0211 03:08:14.084993   60206 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0211 03:08:14.085068   60206 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0211 03:08:14.140278   60206 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0211 03:08:14.140321   60206 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0211 03:08:14.215057   60206 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0211 03:08:14.215143   60206 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0211 03:08:14.224576   60206 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0211 03:08:14.224609   60206 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0211 03:08:14.224638   60206 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0211 03:08:14.256422   60206 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0211 03:08:14.288522   60206 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0211 03:08:14.332251   60206 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0211 03:08:14.359917   60206 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0211 03:08:14.369539   60206 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0211 03:08:14.369547   60206 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0211 03:08:14.376705   60206 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0211 03:08:14.385806   60206 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0211 03:08:14.512338   60206 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0211 03:08:14.651938   60206 cache_images.go:92] duration metric: took 1.070586497s to LoadCachedImages
	W0211 03:08:14.652102   60206 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0211 03:08:14.652128   60206 kubeadm.go:934] updating node { 192.168.39.206 8443 v1.20.0 crio true true} ...
	I0211 03:08:14.652245   60206 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-244815 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.206
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-244815 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0211 03:08:14.652332   60206 ssh_runner.go:195] Run: crio config
	I0211 03:08:14.701069   60206 cni.go:84] Creating CNI manager for ""
	I0211 03:08:14.701104   60206 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0211 03:08:14.701115   60206 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0211 03:08:14.701140   60206 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.206 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-244815 NodeName:old-k8s-version-244815 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.206"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.206 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0211 03:08:14.701277   60206 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.206
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-244815"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.206
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.206"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0211 03:08:14.701334   60206 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0211 03:08:14.710760   60206 binaries.go:44] Found k8s binaries, skipping transfer
	I0211 03:08:14.710832   60206 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0211 03:08:14.719663   60206 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0211 03:08:14.735976   60206 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0211 03:08:14.751761   60206 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0211 03:08:14.767612   60206 ssh_runner.go:195] Run: grep 192.168.39.206	control-plane.minikube.internal$ /etc/hosts
	I0211 03:08:14.771100   60206 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.206	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0211 03:08:14.782157   60206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0211 03:08:14.891084   60206 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0211 03:08:14.908068   60206 certs.go:68] Setting up /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/old-k8s-version-244815 for IP: 192.168.39.206
	I0211 03:08:14.908087   60206 certs.go:194] generating shared ca certs ...
	I0211 03:08:14.908103   60206 certs.go:226] acquiring lock for ca certs: {Name:mk14e70e4f3b98aff6eac535114852cc1d70eb3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 03:08:14.908239   60206 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20400-12456/.minikube/ca.key
	I0211 03:08:14.908310   60206 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20400-12456/.minikube/proxy-client-ca.key
	I0211 03:08:14.908327   60206 certs.go:256] generating profile certs ...
	I0211 03:08:14.908397   60206 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/old-k8s-version-244815/client.key
	I0211 03:08:14.908415   60206 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/old-k8s-version-244815/client.crt with IP's: []
	I0211 03:08:15.238501   60206 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/old-k8s-version-244815/client.crt ...
	I0211 03:08:15.238529   60206 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/old-k8s-version-244815/client.crt: {Name:mkf907a3ce1d7dab3d96caf9173af06977fd3fa7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 03:08:15.238726   60206 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/old-k8s-version-244815/client.key ...
	I0211 03:08:15.238745   60206 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/old-k8s-version-244815/client.key: {Name:mkbc15cdd29653c583dc0cc126a1f659c6732e1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 03:08:15.238860   60206 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/old-k8s-version-244815/apiserver.key.717128d0
	I0211 03:08:15.238899   60206 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/old-k8s-version-244815/apiserver.crt.717128d0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.206]
	I0211 03:08:15.367725   60206 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/old-k8s-version-244815/apiserver.crt.717128d0 ...
	I0211 03:08:15.367755   60206 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/old-k8s-version-244815/apiserver.crt.717128d0: {Name:mkfaceeb91905cbadb702224f6d8f57762ba9eb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 03:08:15.367934   60206 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/old-k8s-version-244815/apiserver.key.717128d0 ...
	I0211 03:08:15.367957   60206 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/old-k8s-version-244815/apiserver.key.717128d0: {Name:mk1394f7773bf6b8c0ce1f747cc37bfe30d6c072 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 03:08:15.368058   60206 certs.go:381] copying /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/old-k8s-version-244815/apiserver.crt.717128d0 -> /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/old-k8s-version-244815/apiserver.crt
	I0211 03:08:15.368164   60206 certs.go:385] copying /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/old-k8s-version-244815/apiserver.key.717128d0 -> /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/old-k8s-version-244815/apiserver.key
	I0211 03:08:15.368250   60206 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/old-k8s-version-244815/proxy-client.key
	I0211 03:08:15.368272   60206 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/old-k8s-version-244815/proxy-client.crt with IP's: []
	I0211 03:08:15.605524   60206 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/old-k8s-version-244815/proxy-client.crt ...
	I0211 03:08:15.605556   60206 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/old-k8s-version-244815/proxy-client.crt: {Name:mk1c95f7745ad6ba80a10d38d83c27aa6a7b9ddb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 03:08:15.605747   60206 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/old-k8s-version-244815/proxy-client.key ...
	I0211 03:08:15.605765   60206 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/old-k8s-version-244815/proxy-client.key: {Name:mk3b90768531416dd03a8f32d17882bdc44b2af5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 03:08:15.606024   60206 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/19645.pem (1338 bytes)
	W0211 03:08:15.606063   60206 certs.go:480] ignoring /home/jenkins/minikube-integration/20400-12456/.minikube/certs/19645_empty.pem, impossibly tiny 0 bytes
	I0211 03:08:15.606074   60206 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca-key.pem (1675 bytes)
	I0211 03:08:15.606093   60206 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca.pem (1078 bytes)
	I0211 03:08:15.606121   60206 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/cert.pem (1123 bytes)
	I0211 03:08:15.606142   60206 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/key.pem (1679 bytes)
	I0211 03:08:15.606180   60206 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12456/.minikube/files/etc/ssl/certs/196452.pem (1708 bytes)
	I0211 03:08:15.606731   60206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0211 03:08:15.630974   60206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0211 03:08:15.652627   60206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0211 03:08:15.675555   60206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0211 03:08:15.697246   60206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/old-k8s-version-244815/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0211 03:08:15.721214   60206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/old-k8s-version-244815/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0211 03:08:15.742609   60206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/old-k8s-version-244815/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0211 03:08:15.769666   60206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/old-k8s-version-244815/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0211 03:08:15.802936   60206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/files/etc/ssl/certs/196452.pem --> /usr/share/ca-certificates/196452.pem (1708 bytes)
	I0211 03:08:15.828183   60206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0211 03:08:15.849190   60206 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/certs/19645.pem --> /usr/share/ca-certificates/19645.pem (1338 bytes)
	I0211 03:08:15.870151   60206 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0211 03:08:15.885036   60206 ssh_runner.go:195] Run: openssl version
	I0211 03:08:15.890349   60206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/196452.pem && ln -fs /usr/share/ca-certificates/196452.pem /etc/ssl/certs/196452.pem"
	I0211 03:08:15.899791   60206 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/196452.pem
	I0211 03:08:15.903824   60206 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 11 02:09 /usr/share/ca-certificates/196452.pem
	I0211 03:08:15.903881   60206 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/196452.pem
	I0211 03:08:15.909189   60206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/196452.pem /etc/ssl/certs/3ec20f2e.0"
	I0211 03:08:15.918931   60206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0211 03:08:15.929304   60206 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0211 03:08:15.933690   60206 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 11 02:02 /usr/share/ca-certificates/minikubeCA.pem
	I0211 03:08:15.933734   60206 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0211 03:08:15.939101   60206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0211 03:08:15.949247   60206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19645.pem && ln -fs /usr/share/ca-certificates/19645.pem /etc/ssl/certs/19645.pem"
	I0211 03:08:15.959750   60206 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19645.pem
	I0211 03:08:15.964016   60206 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 11 02:09 /usr/share/ca-certificates/19645.pem
	I0211 03:08:15.964072   60206 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19645.pem
	I0211 03:08:15.969632   60206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/19645.pem /etc/ssl/certs/51391683.0"
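	For reference, the /etc/ssl/certs/<hash>.0 link names used above are the OpenSSL subject hashes of the corresponding PEM files, that is, the value printed by the logged "openssl x509 -hash -noout -in <pem>" call with a ".0" suffix appended; for example (values taken from the lines above, not recomputed):
	
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0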
	I0211 03:08:15.980529   60206 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0211 03:08:15.984753   60206 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0211 03:08:15.984810   60206 kubeadm.go:392] StartCluster: {Name:old-k8s-version-244815 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-244815 Namespace:de
fault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Dis
ableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0211 03:08:15.984899   60206 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0211 03:08:15.984951   60206 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0211 03:08:16.024943   60206 cri.go:89] found id: ""
	I0211 03:08:16.025003   60206 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0211 03:08:16.034934   60206 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0211 03:08:16.044283   60206 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0211 03:08:16.054035   60206 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0211 03:08:16.054060   60206 kubeadm.go:157] found existing configuration files:
	
	I0211 03:08:16.054121   60206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0211 03:08:16.064967   60206 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0211 03:08:16.065021   60206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0211 03:08:16.073543   60206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0211 03:08:16.081678   60206 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0211 03:08:16.081733   60206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0211 03:08:16.090111   60206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0211 03:08:16.098058   60206 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0211 03:08:16.098110   60206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0211 03:08:16.106299   60206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0211 03:08:16.114234   60206 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0211 03:08:16.114287   60206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
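	The stale-config check above runs the same grep-then-remove step for each of the four kubeconfig files; condensed into one loop it is roughly (a sketch, not the literal commands minikube issues):
	
	  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	    sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f || sudo rm -f /etc/kubernetes/$f
	  done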
	I0211 03:08:16.122272   60206 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0211 03:08:16.374359   60206 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0211 03:10:14.092931   60206 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0211 03:10:14.093032   60206 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0211 03:10:14.094533   60206 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0211 03:10:14.094647   60206 kubeadm.go:310] [preflight] Running pre-flight checks
	I0211 03:10:14.094759   60206 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0211 03:10:14.094915   60206 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0211 03:10:14.095061   60206 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0211 03:10:14.095163   60206 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0211 03:10:14.097197   60206 out.go:235]   - Generating certificates and keys ...
	I0211 03:10:14.097296   60206 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0211 03:10:14.097400   60206 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0211 03:10:14.097502   60206 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0211 03:10:14.097588   60206 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0211 03:10:14.097673   60206 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0211 03:10:14.097732   60206 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0211 03:10:14.097814   60206 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0211 03:10:14.097967   60206 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-244815] and IPs [192.168.39.206 127.0.0.1 ::1]
	I0211 03:10:14.098042   60206 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0211 03:10:14.098245   60206 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-244815] and IPs [192.168.39.206 127.0.0.1 ::1]
	I0211 03:10:14.098344   60206 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0211 03:10:14.098441   60206 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0211 03:10:14.098514   60206 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0211 03:10:14.098595   60206 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0211 03:10:14.098643   60206 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0211 03:10:14.098689   60206 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0211 03:10:14.098758   60206 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0211 03:10:14.098805   60206 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0211 03:10:14.098936   60206 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0211 03:10:14.099075   60206 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0211 03:10:14.099144   60206 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0211 03:10:14.099236   60206 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0211 03:10:14.100728   60206 out.go:235]   - Booting up control plane ...
	I0211 03:10:14.100821   60206 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0211 03:10:14.100926   60206 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0211 03:10:14.101028   60206 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0211 03:10:14.101147   60206 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0211 03:10:14.101376   60206 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0211 03:10:14.101452   60206 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0211 03:10:14.101557   60206 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0211 03:10:14.101754   60206 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0211 03:10:14.101860   60206 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0211 03:10:14.102031   60206 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0211 03:10:14.102125   60206 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0211 03:10:14.102328   60206 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0211 03:10:14.102437   60206 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0211 03:10:14.102676   60206 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0211 03:10:14.102768   60206 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0211 03:10:14.103009   60206 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0211 03:10:14.103026   60206 kubeadm.go:310] 
	I0211 03:10:14.103086   60206 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0211 03:10:14.103150   60206 kubeadm.go:310] 		timed out waiting for the condition
	I0211 03:10:14.103159   60206 kubeadm.go:310] 
	I0211 03:10:14.103214   60206 kubeadm.go:310] 	This error is likely caused by:
	I0211 03:10:14.103262   60206 kubeadm.go:310] 		- The kubelet is not running
	I0211 03:10:14.103422   60206 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0211 03:10:14.103444   60206 kubeadm.go:310] 
	I0211 03:10:14.103568   60206 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0211 03:10:14.103617   60206 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0211 03:10:14.103671   60206 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0211 03:10:14.103680   60206 kubeadm.go:310] 
	I0211 03:10:14.103803   60206 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0211 03:10:14.103921   60206 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0211 03:10:14.103935   60206 kubeadm.go:310] 
	I0211 03:10:14.104027   60206 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0211 03:10:14.104106   60206 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0211 03:10:14.104192   60206 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0211 03:10:14.104278   60206 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0211 03:10:14.104315   60206 kubeadm.go:310] 
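	On the guest, the troubleshooting steps kubeadm suggests above come down to the following commands (quoted from the error text; CONTAINERID is a placeholder for an ID taken from the ps output):
	
	  systemctl status kubelet
	  journalctl -xeu kubelet
	  crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	  crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID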
	W0211 03:10:14.104395   60206 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-244815] and IPs [192.168.39.206 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-244815] and IPs [192.168.39.206 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-244815] and IPs [192.168.39.206 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-244815] and IPs [192.168.39.206 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0211 03:10:14.104434   60206 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0211 03:10:19.446320   60206 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.341860105s)
	I0211 03:10:19.446404   60206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0211 03:10:19.460190   60206 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0211 03:10:19.469243   60206 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0211 03:10:19.469266   60206 kubeadm.go:157] found existing configuration files:
	
	I0211 03:10:19.469317   60206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0211 03:10:19.478394   60206 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0211 03:10:19.478443   60206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0211 03:10:19.487183   60206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0211 03:10:19.495288   60206 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0211 03:10:19.495344   60206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0211 03:10:19.504506   60206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0211 03:10:19.513123   60206 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0211 03:10:19.513178   60206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0211 03:10:19.521849   60206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0211 03:10:19.529691   60206 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0211 03:10:19.529737   60206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0211 03:10:19.538252   60206 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0211 03:10:19.744438   60206 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0211 03:12:15.818754   60206 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0211 03:12:15.819015   60206 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0211 03:12:15.820839   60206 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0211 03:12:15.820902   60206 kubeadm.go:310] [preflight] Running pre-flight checks
	I0211 03:12:15.821005   60206 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0211 03:12:15.821193   60206 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0211 03:12:15.821319   60206 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0211 03:12:15.821448   60206 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0211 03:12:15.823059   60206 out.go:235]   - Generating certificates and keys ...
	I0211 03:12:15.823169   60206 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0211 03:12:15.823279   60206 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0211 03:12:15.823397   60206 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0211 03:12:15.823496   60206 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0211 03:12:15.823596   60206 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0211 03:12:15.823691   60206 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0211 03:12:15.823796   60206 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0211 03:12:15.823883   60206 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0211 03:12:15.824009   60206 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0211 03:12:15.824112   60206 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0211 03:12:15.824168   60206 kubeadm.go:310] [certs] Using the existing "sa" key
	I0211 03:12:15.824246   60206 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0211 03:12:15.824318   60206 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0211 03:12:15.824387   60206 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0211 03:12:15.824458   60206 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0211 03:12:15.824528   60206 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0211 03:12:15.824661   60206 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0211 03:12:15.824798   60206 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0211 03:12:15.824872   60206 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0211 03:12:15.824970   60206 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0211 03:12:15.826183   60206 out.go:235]   - Booting up control plane ...
	I0211 03:12:15.826253   60206 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0211 03:12:15.826314   60206 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0211 03:12:15.826391   60206 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0211 03:12:15.826512   60206 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0211 03:12:15.826732   60206 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0211 03:12:15.826800   60206 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0211 03:12:15.826938   60206 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0211 03:12:15.827203   60206 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0211 03:12:15.827298   60206 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0211 03:12:15.827538   60206 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0211 03:12:15.827620   60206 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0211 03:12:15.827823   60206 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0211 03:12:15.827934   60206 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0211 03:12:15.828204   60206 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0211 03:12:15.828289   60206 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0211 03:12:15.828466   60206 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0211 03:12:15.828476   60206 kubeadm.go:310] 
	I0211 03:12:15.828531   60206 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0211 03:12:15.828586   60206 kubeadm.go:310] 		timed out waiting for the condition
	I0211 03:12:15.828601   60206 kubeadm.go:310] 
	I0211 03:12:15.828654   60206 kubeadm.go:310] 	This error is likely caused by:
	I0211 03:12:15.828702   60206 kubeadm.go:310] 		- The kubelet is not running
	I0211 03:12:15.828859   60206 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0211 03:12:15.828876   60206 kubeadm.go:310] 
	I0211 03:12:15.829027   60206 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0211 03:12:15.829080   60206 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0211 03:12:15.829128   60206 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0211 03:12:15.829143   60206 kubeadm.go:310] 
	I0211 03:12:15.829294   60206 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0211 03:12:15.829426   60206 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0211 03:12:15.829438   60206 kubeadm.go:310] 
	I0211 03:12:15.829571   60206 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0211 03:12:15.829700   60206 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0211 03:12:15.829803   60206 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0211 03:12:15.829887   60206 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0211 03:12:15.829919   60206 kubeadm.go:310] 
	I0211 03:12:15.829945   60206 kubeadm.go:394] duration metric: took 3m59.84513826s to StartCluster
	I0211 03:12:15.829988   60206 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:12:15.830036   60206 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:12:15.876003   60206 cri.go:89] found id: ""
	I0211 03:12:15.876028   60206 logs.go:282] 0 containers: []
	W0211 03:12:15.876039   60206 logs.go:284] No container was found matching "kube-apiserver"
	I0211 03:12:15.876046   60206 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:12:15.876110   60206 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:12:15.909740   60206 cri.go:89] found id: ""
	I0211 03:12:15.909769   60206 logs.go:282] 0 containers: []
	W0211 03:12:15.909782   60206 logs.go:284] No container was found matching "etcd"
	I0211 03:12:15.909791   60206 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:12:15.909876   60206 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:12:15.942687   60206 cri.go:89] found id: ""
	I0211 03:12:15.942717   60206 logs.go:282] 0 containers: []
	W0211 03:12:15.942728   60206 logs.go:284] No container was found matching "coredns"
	I0211 03:12:15.942735   60206 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:12:15.942805   60206 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:12:15.983221   60206 cri.go:89] found id: ""
	I0211 03:12:15.983247   60206 logs.go:282] 0 containers: []
	W0211 03:12:15.983257   60206 logs.go:284] No container was found matching "kube-scheduler"
	I0211 03:12:15.983264   60206 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:12:15.983323   60206 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:12:16.016097   60206 cri.go:89] found id: ""
	I0211 03:12:16.016130   60206 logs.go:282] 0 containers: []
	W0211 03:12:16.016143   60206 logs.go:284] No container was found matching "kube-proxy"
	I0211 03:12:16.016152   60206 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:12:16.016213   60206 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:12:16.049017   60206 cri.go:89] found id: ""
	I0211 03:12:16.049051   60206 logs.go:282] 0 containers: []
	W0211 03:12:16.049064   60206 logs.go:284] No container was found matching "kube-controller-manager"
	I0211 03:12:16.049073   60206 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:12:16.049135   60206 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:12:16.082424   60206 cri.go:89] found id: ""
	I0211 03:12:16.082457   60206 logs.go:282] 0 containers: []
	W0211 03:12:16.082472   60206 logs.go:284] No container was found matching "kindnet"
	I0211 03:12:16.082485   60206 logs.go:123] Gathering logs for kubelet ...
	I0211 03:12:16.082499   60206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:12:16.132502   60206 logs.go:123] Gathering logs for dmesg ...
	I0211 03:12:16.132537   60206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:12:16.147941   60206 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:12:16.147977   60206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:12:16.267533   60206 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:12:16.267559   60206 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:12:16.267574   60206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:12:16.388931   60206 logs.go:123] Gathering logs for container status ...
	I0211 03:12:16.388970   60206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0211 03:12:16.431142   60206 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0211 03:12:16.431223   60206 out.go:270] * 
	* 
	W0211 03:12:16.431282   60206 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0211 03:12:16.431299   60206 out.go:270] * 
	* 
	W0211 03:12:16.432466   60206 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0211 03:12:16.435876   60206 out.go:201] 
	W0211 03:12:16.436937   60206 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0211 03:12:16.436980   60206 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0211 03:12:16.437003   60206 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0211 03:12:16.438395   60206 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-244815 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-244815 -n old-k8s-version-244815
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-244815 -n old-k8s-version-244815: exit status 6 (250.565287ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0211 03:12:16.717762   62822 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-244815" does not appear in /home/jenkins/minikube-integration/20400-12456/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-244815" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (272.98s)
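Editor's note: the repeated kubelet-check messages above all point at the same symptom: nothing ever answered on 127.0.0.1:10248, so kubeadm's wait-control-plane phase timed out and minikube exited with K8S_KUBELET_NOT_RUNNING. Below is a minimal triage sketch built only from commands the log itself suggests, assuming the old-k8s-version-244815 VM is still reachable over `minikube ssh`; the cgroup-driver mismatch in the last step is a hypothesis taken from minikube's own suggestion, not something this log confirms.

	# Check whether the kubelet unit is running and why it may have exited.
	minikube ssh -p old-k8s-version-244815 -- sudo systemctl status kubelet --no-pager
	minikube ssh -p old-k8s-version-244815 -- sudo journalctl -xeu kubelet --no-pager
	# Probe the same healthz endpoint that kubeadm's kubelet-check was polling.
	minikube ssh -p old-k8s-version-244815 -- curl -sSL http://localhost:10248/healthz
	# See whether CRI-O started any control-plane containers at all.
	minikube ssh -p old-k8s-version-244815 -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a
	# If the journal shows a cgroup-driver mismatch, retry the start with the override minikube suggests.
	minikube start -p old-k8s-version-244815 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd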

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.52s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-244815 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context old-k8s-version-244815 create -f testdata/busybox.yaml: exit status 1 (48.220119ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-244815" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:194: kubectl --context old-k8s-version-244815 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-244815 -n old-k8s-version-244815
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-244815 -n old-k8s-version-244815: exit status 6 (231.685903ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0211 03:12:17.003571   62861 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-244815" does not appear in /home/jenkins/minikube-integration/20400-12456/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-244815" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-244815 -n old-k8s-version-244815
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-244815 -n old-k8s-version-244815: exit status 6 (238.582858ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0211 03:12:17.244607   62891 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-244815" does not appear in /home/jenkins/minikube-integration/20400-12456/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-244815" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.52s)
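Editor's note: this failure is a direct consequence of FirstStart. Because kubeadm never initialized the cluster, no old-k8s-version-244815 entry was written to /home/jenkins/minikube-integration/20400-12456/kubeconfig, so every `kubectl --context` call aborts with "context ... does not exist" before reaching any apiserver. A short sketch of the check implied by the stale-context warning above; it only becomes useful once the profile actually starts, which did not happen in this run.

	# List the contexts kubectl can see in the kubeconfig used by the harness.
	kubectl config get-contexts --kubeconfig=/home/jenkins/minikube-integration/20400-12456/kubeconfig
	# Rewrite the profile's context entry from minikube's current state, as the status warning suggests.
	minikube update-context -p old-k8s-version-244815
	# Verify the context now resolves to a reachable apiserver.
	kubectl --context old-k8s-version-244815 get nodes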

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (101.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-244815 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0211 03:12:23.755137   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/functional-454298/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-244815 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m40.936016601s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-244815 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-244815 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-244815 describe deploy/metrics-server -n kube-system: exit status 1 (42.182476ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-244815" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-244815 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-244815 -n old-k8s-version-244815
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-244815 -n old-k8s-version-244815: exit status 6 (212.06354ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0211 03:13:58.434057   63813 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-244815" does not appear in /home/jenkins/minikube-integration/20400-12456/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-244815" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (101.19s)
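Note on the EnableAddonWhileActive failure: the in-VM `kubectl apply` of the metrics-server manifests is refused at localhost:8443 because the control plane is not up after the restart, and the host-side kubeconfig has no "old-k8s-version-244815" entry with which to describe the deployment afterwards. A rough triage sketch, assuming access to the same profile (not part of the captured test run), might be:

	# Check whether the host and the control plane are actually running
	minikube status -p old-k8s-version-244815
	# Look for a kube-apiserver container inside the VM (CRI-O runtime)
	minikube ssh -p old-k8s-version-244815 -- sudo crictl ps -a | grep kube-apiserver
	# Collect the full logs referenced by the advice box above
	minikube logs -p old-k8s-version-244815 --file=logs.txt
	# Retry the addon only once the apiserver is reachable
	minikube addons enable metrics-server -p old-k8s-version-244815 \
	  --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
	  --registries=MetricsServer=fake.domain

The exit status 10 (MK_ADDON_ENABLE) here is therefore a symptom; the 8m26s SecondStart failure below is where the apiserver actually fails to come up.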

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (508.13s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-244815 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0211 03:14:16.210830   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-244815 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (8m26.168818362s)

                                                
                                                
-- stdout --
	* [old-k8s-version-244815] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20400
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20400-12456/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20400-12456/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-244815" primary control-plane node in "old-k8s-version-244815" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-244815" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0211 03:14:00.952684   63944 out.go:345] Setting OutFile to fd 1 ...
	I0211 03:14:00.953124   63944 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 03:14:00.953138   63944 out.go:358] Setting ErrFile to fd 2...
	I0211 03:14:00.953145   63944 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 03:14:00.953379   63944 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20400-12456/.minikube/bin
	I0211 03:14:00.953924   63944 out.go:352] Setting JSON to false
	I0211 03:14:00.954899   63944 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6992,"bootTime":1739236649,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0211 03:14:00.954995   63944 start.go:139] virtualization: kvm guest
	I0211 03:14:00.956891   63944 out.go:177] * [old-k8s-version-244815] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0211 03:14:00.958269   63944 out.go:177]   - MINIKUBE_LOCATION=20400
	I0211 03:14:00.958328   63944 notify.go:220] Checking for updates...
	I0211 03:14:00.960544   63944 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0211 03:14:00.961739   63944 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20400-12456/kubeconfig
	I0211 03:14:00.962981   63944 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20400-12456/.minikube
	I0211 03:14:00.964199   63944 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0211 03:14:00.965288   63944 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0211 03:14:00.966722   63944 config.go:182] Loaded profile config "old-k8s-version-244815": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0211 03:14:00.967120   63944 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 03:14:00.967158   63944 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 03:14:00.982677   63944 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44489
	I0211 03:14:00.983090   63944 main.go:141] libmachine: () Calling .GetVersion
	I0211 03:14:00.983666   63944 main.go:141] libmachine: Using API Version  1
	I0211 03:14:00.983692   63944 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 03:14:00.984133   63944 main.go:141] libmachine: () Calling .GetMachineName
	I0211 03:14:00.984359   63944 main.go:141] libmachine: (old-k8s-version-244815) Calling .DriverName
	I0211 03:14:00.986226   63944 out.go:177] * Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
	I0211 03:14:00.987422   63944 driver.go:394] Setting default libvirt URI to qemu:///system
	I0211 03:14:00.987848   63944 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 03:14:00.987893   63944 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 03:14:01.002411   63944 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40587
	I0211 03:14:01.002736   63944 main.go:141] libmachine: () Calling .GetVersion
	I0211 03:14:01.003160   63944 main.go:141] libmachine: Using API Version  1
	I0211 03:14:01.003182   63944 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 03:14:01.003473   63944 main.go:141] libmachine: () Calling .GetMachineName
	I0211 03:14:01.003658   63944 main.go:141] libmachine: (old-k8s-version-244815) Calling .DriverName
	I0211 03:14:01.039335   63944 out.go:177] * Using the kvm2 driver based on existing profile
	I0211 03:14:01.040540   63944 start.go:297] selected driver: kvm2
	I0211 03:14:01.040553   63944 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-244815 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-2
44815 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube
-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0211 03:14:01.040654   63944 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0211 03:14:01.041272   63944 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0211 03:14:01.041336   63944 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20400-12456/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0211 03:14:01.055335   63944 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0211 03:14:01.055676   63944 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0211 03:14:01.055703   63944 cni.go:84] Creating CNI manager for ""
	I0211 03:14:01.055769   63944 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0211 03:14:01.055807   63944 start.go:340] cluster config:
	{Name:old-k8s-version-244815 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-244815 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0211 03:14:01.055887   63944 iso.go:125] acquiring lock: {Name:mkf866c6e52b4efa55cc59a9f329105471716f9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0211 03:14:01.058018   63944 out.go:177] * Starting "old-k8s-version-244815" primary control-plane node in "old-k8s-version-244815" cluster
	I0211 03:14:01.059060   63944 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0211 03:14:01.059089   63944 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20400-12456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0211 03:14:01.059101   63944 cache.go:56] Caching tarball of preloaded images
	I0211 03:14:01.059167   63944 preload.go:172] Found /home/jenkins/minikube-integration/20400-12456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0211 03:14:01.059178   63944 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0211 03:14:01.059266   63944 profile.go:143] Saving config to /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/old-k8s-version-244815/config.json ...
	I0211 03:14:01.059427   63944 start.go:360] acquireMachinesLock for old-k8s-version-244815: {Name:mk0cbf79bfabdb28d0a301765db34c154a72eff0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0211 03:14:01.059475   63944 start.go:364] duration metric: took 30.95µs to acquireMachinesLock for "old-k8s-version-244815"
	I0211 03:14:01.059491   63944 start.go:96] Skipping create...Using existing machine configuration
	I0211 03:14:01.059498   63944 fix.go:54] fixHost starting: 
	I0211 03:14:01.059755   63944 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 03:14:01.059787   63944 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 03:14:01.073659   63944 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34813
	I0211 03:14:01.074142   63944 main.go:141] libmachine: () Calling .GetVersion
	I0211 03:14:01.074711   63944 main.go:141] libmachine: Using API Version  1
	I0211 03:14:01.074739   63944 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 03:14:01.075080   63944 main.go:141] libmachine: () Calling .GetMachineName
	I0211 03:14:01.075295   63944 main.go:141] libmachine: (old-k8s-version-244815) Calling .DriverName
	I0211 03:14:01.075461   63944 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetState
	I0211 03:14:01.077046   63944 fix.go:112] recreateIfNeeded on old-k8s-version-244815: state=Stopped err=<nil>
	I0211 03:14:01.077073   63944 main.go:141] libmachine: (old-k8s-version-244815) Calling .DriverName
	W0211 03:14:01.077219   63944 fix.go:138] unexpected machine state, will restart: <nil>
	I0211 03:14:01.079006   63944 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-244815" ...
	I0211 03:14:01.080224   63944 main.go:141] libmachine: (old-k8s-version-244815) Calling .Start
	I0211 03:14:01.080372   63944 main.go:141] libmachine: (old-k8s-version-244815) starting domain...
	I0211 03:14:01.080392   63944 main.go:141] libmachine: (old-k8s-version-244815) ensuring networks are active...
	I0211 03:14:01.081102   63944 main.go:141] libmachine: (old-k8s-version-244815) Ensuring network default is active
	I0211 03:14:01.081480   63944 main.go:141] libmachine: (old-k8s-version-244815) Ensuring network mk-old-k8s-version-244815 is active
	I0211 03:14:01.081926   63944 main.go:141] libmachine: (old-k8s-version-244815) getting domain XML...
	I0211 03:14:01.082771   63944 main.go:141] libmachine: (old-k8s-version-244815) creating domain...
	I0211 03:14:02.380911   63944 main.go:141] libmachine: (old-k8s-version-244815) waiting for IP...
	I0211 03:14:02.381776   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:14:02.382228   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | unable to find current IP address of domain old-k8s-version-244815 in network mk-old-k8s-version-244815
	I0211 03:14:02.382365   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | I0211 03:14:02.382232   63979 retry.go:31] will retry after 286.433255ms: waiting for domain to come up
	I0211 03:14:02.671387   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:14:02.671938   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | unable to find current IP address of domain old-k8s-version-244815 in network mk-old-k8s-version-244815
	I0211 03:14:02.671967   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | I0211 03:14:02.671902   63979 retry.go:31] will retry after 290.390141ms: waiting for domain to come up
	I0211 03:14:02.964263   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:14:02.964790   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | unable to find current IP address of domain old-k8s-version-244815 in network mk-old-k8s-version-244815
	I0211 03:14:02.964861   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | I0211 03:14:02.964752   63979 retry.go:31] will retry after 334.741347ms: waiting for domain to come up
	I0211 03:14:03.301146   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:14:03.301837   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | unable to find current IP address of domain old-k8s-version-244815 in network mk-old-k8s-version-244815
	I0211 03:14:03.301870   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | I0211 03:14:03.301809   63979 retry.go:31] will retry after 500.022522ms: waiting for domain to come up
	I0211 03:14:03.803841   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:14:03.804481   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | unable to find current IP address of domain old-k8s-version-244815 in network mk-old-k8s-version-244815
	I0211 03:14:03.804512   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | I0211 03:14:03.804431   63979 retry.go:31] will retry after 580.428776ms: waiting for domain to come up
	I0211 03:14:04.387199   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:14:04.387784   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | unable to find current IP address of domain old-k8s-version-244815 in network mk-old-k8s-version-244815
	I0211 03:14:04.387816   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | I0211 03:14:04.387766   63979 retry.go:31] will retry after 707.085798ms: waiting for domain to come up
	I0211 03:14:05.096185   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:14:05.096750   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | unable to find current IP address of domain old-k8s-version-244815 in network mk-old-k8s-version-244815
	I0211 03:14:05.096807   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | I0211 03:14:05.096719   63979 retry.go:31] will retry after 1.187739959s: waiting for domain to come up
	I0211 03:14:06.286592   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:14:06.287165   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | unable to find current IP address of domain old-k8s-version-244815 in network mk-old-k8s-version-244815
	I0211 03:14:06.287194   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | I0211 03:14:06.287134   63979 retry.go:31] will retry after 1.185604935s: waiting for domain to come up
	I0211 03:14:07.474623   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:14:07.475152   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | unable to find current IP address of domain old-k8s-version-244815 in network mk-old-k8s-version-244815
	I0211 03:14:07.475184   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | I0211 03:14:07.475130   63979 retry.go:31] will retry after 1.142746966s: waiting for domain to come up
	I0211 03:14:08.619476   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:14:08.619988   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | unable to find current IP address of domain old-k8s-version-244815 in network mk-old-k8s-version-244815
	I0211 03:14:08.620014   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | I0211 03:14:08.619952   63979 retry.go:31] will retry after 2.271360106s: waiting for domain to come up
	I0211 03:14:10.892681   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:14:10.893166   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | unable to find current IP address of domain old-k8s-version-244815 in network mk-old-k8s-version-244815
	I0211 03:14:10.893209   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | I0211 03:14:10.893170   63979 retry.go:31] will retry after 2.698301331s: waiting for domain to come up
	I0211 03:14:13.593382   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:14:13.593863   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | unable to find current IP address of domain old-k8s-version-244815 in network mk-old-k8s-version-244815
	I0211 03:14:13.593952   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | I0211 03:14:13.593859   63979 retry.go:31] will retry after 3.606296558s: waiting for domain to come up
	I0211 03:14:17.305812   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:14:17.306258   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | unable to find current IP address of domain old-k8s-version-244815 in network mk-old-k8s-version-244815
	I0211 03:14:17.306291   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | I0211 03:14:17.306212   63979 retry.go:31] will retry after 2.799294732s: waiting for domain to come up
	I0211 03:14:20.109119   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:14:20.109618   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has current primary IP address 192.168.39.206 and MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:14:20.109635   63944 main.go:141] libmachine: (old-k8s-version-244815) found domain IP: 192.168.39.206
	I0211 03:14:20.109642   63944 main.go:141] libmachine: (old-k8s-version-244815) reserving static IP address...
	I0211 03:14:20.110063   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | found host DHCP lease matching {name: "old-k8s-version-244815", mac: "52:54:00:5e:6f:f7", ip: "192.168.39.206"} in network mk-old-k8s-version-244815: {Iface:virbr1 ExpiryTime:2025-02-11 04:07:58 +0000 UTC Type:0 Mac:52:54:00:5e:6f:f7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:old-k8s-version-244815 Clientid:01:52:54:00:5e:6f:f7}
	I0211 03:14:20.110084   63944 main.go:141] libmachine: (old-k8s-version-244815) reserved static IP address 192.168.39.206 for domain old-k8s-version-244815
	I0211 03:14:20.110099   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | skip adding static IP to network mk-old-k8s-version-244815 - found existing host DHCP lease matching {name: "old-k8s-version-244815", mac: "52:54:00:5e:6f:f7", ip: "192.168.39.206"}
	I0211 03:14:20.110110   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | Getting to WaitForSSH function...
	I0211 03:14:20.110122   63944 main.go:141] libmachine: (old-k8s-version-244815) waiting for SSH...
	I0211 03:14:20.112118   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:14:20.112400   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:6f:f7", ip: ""} in network mk-old-k8s-version-244815: {Iface:virbr1 ExpiryTime:2025-02-11 04:07:58 +0000 UTC Type:0 Mac:52:54:00:5e:6f:f7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:old-k8s-version-244815 Clientid:01:52:54:00:5e:6f:f7}
	I0211 03:14:20.112424   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined IP address 192.168.39.206 and MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:14:20.112542   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | Using SSH client type: external
	I0211 03:14:20.112569   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | Using SSH private key: /home/jenkins/minikube-integration/20400-12456/.minikube/machines/old-k8s-version-244815/id_rsa (-rw-------)
	I0211 03:14:20.112630   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.206 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20400-12456/.minikube/machines/old-k8s-version-244815/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0211 03:14:20.112661   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | About to run SSH command:
	I0211 03:14:20.112690   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | exit 0
	I0211 03:14:20.230765   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | SSH cmd err, output: <nil>: 
	I0211 03:14:20.231178   63944 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetConfigRaw
	I0211 03:14:20.231863   63944 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetIP
	I0211 03:14:20.234573   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:14:20.234966   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:6f:f7", ip: ""} in network mk-old-k8s-version-244815: {Iface:virbr1 ExpiryTime:2025-02-11 04:07:58 +0000 UTC Type:0 Mac:52:54:00:5e:6f:f7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:old-k8s-version-244815 Clientid:01:52:54:00:5e:6f:f7}
	I0211 03:14:20.235001   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined IP address 192.168.39.206 and MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:14:20.235234   63944 profile.go:143] Saving config to /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/old-k8s-version-244815/config.json ...
	I0211 03:14:20.235427   63944 machine.go:93] provisionDockerMachine start ...
	I0211 03:14:20.235444   63944 main.go:141] libmachine: (old-k8s-version-244815) Calling .DriverName
	I0211 03:14:20.235629   63944 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHHostname
	I0211 03:14:20.237947   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:14:20.238278   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:6f:f7", ip: ""} in network mk-old-k8s-version-244815: {Iface:virbr1 ExpiryTime:2025-02-11 04:07:58 +0000 UTC Type:0 Mac:52:54:00:5e:6f:f7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:old-k8s-version-244815 Clientid:01:52:54:00:5e:6f:f7}
	I0211 03:14:20.238305   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined IP address 192.168.39.206 and MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:14:20.238437   63944 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHPort
	I0211 03:14:20.238606   63944 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHKeyPath
	I0211 03:14:20.238768   63944 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHKeyPath
	I0211 03:14:20.238913   63944 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHUsername
	I0211 03:14:20.239069   63944 main.go:141] libmachine: Using SSH client type: native
	I0211 03:14:20.239296   63944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0211 03:14:20.239307   63944 main.go:141] libmachine: About to run SSH command:
	hostname
	I0211 03:14:20.338902   63944 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0211 03:14:20.338935   63944 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetMachineName
	I0211 03:14:20.339176   63944 buildroot.go:166] provisioning hostname "old-k8s-version-244815"
	I0211 03:14:20.339221   63944 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetMachineName
	I0211 03:14:20.339407   63944 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHHostname
	I0211 03:14:20.342021   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:14:20.342409   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:6f:f7", ip: ""} in network mk-old-k8s-version-244815: {Iface:virbr1 ExpiryTime:2025-02-11 04:07:58 +0000 UTC Type:0 Mac:52:54:00:5e:6f:f7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:old-k8s-version-244815 Clientid:01:52:54:00:5e:6f:f7}
	I0211 03:14:20.342441   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined IP address 192.168.39.206 and MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:14:20.342599   63944 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHPort
	I0211 03:14:20.342789   63944 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHKeyPath
	I0211 03:14:20.343002   63944 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHKeyPath
	I0211 03:14:20.343155   63944 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHUsername
	I0211 03:14:20.343310   63944 main.go:141] libmachine: Using SSH client type: native
	I0211 03:14:20.343496   63944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0211 03:14:20.343511   63944 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-244815 && echo "old-k8s-version-244815" | sudo tee /etc/hostname
	I0211 03:14:20.458742   63944 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-244815
	
	I0211 03:14:20.458770   63944 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHHostname
	I0211 03:14:20.461690   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:14:20.462067   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:6f:f7", ip: ""} in network mk-old-k8s-version-244815: {Iface:virbr1 ExpiryTime:2025-02-11 04:07:58 +0000 UTC Type:0 Mac:52:54:00:5e:6f:f7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:old-k8s-version-244815 Clientid:01:52:54:00:5e:6f:f7}
	I0211 03:14:20.462118   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined IP address 192.168.39.206 and MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:14:20.462272   63944 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHPort
	I0211 03:14:20.462489   63944 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHKeyPath
	I0211 03:14:20.462642   63944 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHKeyPath
	I0211 03:14:20.462836   63944 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHUsername
	I0211 03:14:20.463035   63944 main.go:141] libmachine: Using SSH client type: native
	I0211 03:14:20.463281   63944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0211 03:14:20.463304   63944 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-244815' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-244815/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-244815' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0211 03:14:20.571140   63944 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0211 03:14:20.571173   63944 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20400-12456/.minikube CaCertPath:/home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20400-12456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20400-12456/.minikube}
	I0211 03:14:20.571238   63944 buildroot.go:174] setting up certificates
	I0211 03:14:20.571252   63944 provision.go:84] configureAuth start
	I0211 03:14:20.571271   63944 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetMachineName
	I0211 03:14:20.571546   63944 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetIP
	I0211 03:14:20.574018   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:14:20.574383   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:6f:f7", ip: ""} in network mk-old-k8s-version-244815: {Iface:virbr1 ExpiryTime:2025-02-11 04:07:58 +0000 UTC Type:0 Mac:52:54:00:5e:6f:f7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:old-k8s-version-244815 Clientid:01:52:54:00:5e:6f:f7}
	I0211 03:14:20.574408   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined IP address 192.168.39.206 and MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:14:20.574576   63944 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHHostname
	I0211 03:14:20.576727   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:14:20.577016   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:6f:f7", ip: ""} in network mk-old-k8s-version-244815: {Iface:virbr1 ExpiryTime:2025-02-11 04:07:58 +0000 UTC Type:0 Mac:52:54:00:5e:6f:f7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:old-k8s-version-244815 Clientid:01:52:54:00:5e:6f:f7}
	I0211 03:14:20.577053   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined IP address 192.168.39.206 and MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:14:20.577182   63944 provision.go:143] copyHostCerts
	I0211 03:14:20.577232   63944 exec_runner.go:144] found /home/jenkins/minikube-integration/20400-12456/.minikube/ca.pem, removing ...
	I0211 03:14:20.577253   63944 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20400-12456/.minikube/ca.pem
	I0211 03:14:20.577326   63944 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20400-12456/.minikube/ca.pem (1078 bytes)
	I0211 03:14:20.577439   63944 exec_runner.go:144] found /home/jenkins/minikube-integration/20400-12456/.minikube/cert.pem, removing ...
	I0211 03:14:20.577450   63944 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20400-12456/.minikube/cert.pem
	I0211 03:14:20.577481   63944 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20400-12456/.minikube/cert.pem (1123 bytes)
	I0211 03:14:20.577574   63944 exec_runner.go:144] found /home/jenkins/minikube-integration/20400-12456/.minikube/key.pem, removing ...
	I0211 03:14:20.577583   63944 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20400-12456/.minikube/key.pem
	I0211 03:14:20.577612   63944 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20400-12456/.minikube/key.pem (1679 bytes)
	I0211 03:14:20.577678   63944 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20400-12456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-244815 san=[127.0.0.1 192.168.39.206 localhost minikube old-k8s-version-244815]
	I0211 03:14:20.671798   63944 provision.go:177] copyRemoteCerts
	I0211 03:14:20.671868   63944 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0211 03:14:20.671898   63944 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHHostname
	I0211 03:14:20.675017   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:14:20.675431   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:6f:f7", ip: ""} in network mk-old-k8s-version-244815: {Iface:virbr1 ExpiryTime:2025-02-11 04:07:58 +0000 UTC Type:0 Mac:52:54:00:5e:6f:f7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:old-k8s-version-244815 Clientid:01:52:54:00:5e:6f:f7}
	I0211 03:14:20.675470   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined IP address 192.168.39.206 and MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:14:20.675633   63944 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHPort
	I0211 03:14:20.675801   63944 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHKeyPath
	I0211 03:14:20.675944   63944 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHUsername
	I0211 03:14:20.676070   63944 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/old-k8s-version-244815/id_rsa Username:docker}
	I0211 03:14:20.754094   63944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0211 03:14:20.776422   63944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0211 03:14:20.803504   63944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0211 03:14:20.830012   63944 provision.go:87] duration metric: took 258.744572ms to configureAuth
	I0211 03:14:20.830044   63944 buildroot.go:189] setting minikube options for container-runtime
	I0211 03:14:20.830288   63944 config.go:182] Loaded profile config "old-k8s-version-244815": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0211 03:14:20.830383   63944 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHHostname
	I0211 03:14:20.833485   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:14:20.833892   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:6f:f7", ip: ""} in network mk-old-k8s-version-244815: {Iface:virbr1 ExpiryTime:2025-02-11 04:07:58 +0000 UTC Type:0 Mac:52:54:00:5e:6f:f7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:old-k8s-version-244815 Clientid:01:52:54:00:5e:6f:f7}
	I0211 03:14:20.833924   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined IP address 192.168.39.206 and MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:14:20.834111   63944 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHPort
	I0211 03:14:20.834297   63944 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHKeyPath
	I0211 03:14:20.834443   63944 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHKeyPath
	I0211 03:14:20.834599   63944 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHUsername
	I0211 03:14:20.834736   63944 main.go:141] libmachine: Using SSH client type: native
	I0211 03:14:20.834931   63944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0211 03:14:20.834955   63944 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0211 03:14:21.046104   63944 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0211 03:14:21.046139   63944 machine.go:96] duration metric: took 810.698996ms to provisionDockerMachine
	I0211 03:14:21.046156   63944 start.go:293] postStartSetup for "old-k8s-version-244815" (driver="kvm2")
	I0211 03:14:21.046171   63944 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0211 03:14:21.046200   63944 main.go:141] libmachine: (old-k8s-version-244815) Calling .DriverName
	I0211 03:14:21.046521   63944 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0211 03:14:21.046554   63944 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHHostname
	I0211 03:14:21.049127   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:14:21.049479   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:6f:f7", ip: ""} in network mk-old-k8s-version-244815: {Iface:virbr1 ExpiryTime:2025-02-11 04:07:58 +0000 UTC Type:0 Mac:52:54:00:5e:6f:f7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:old-k8s-version-244815 Clientid:01:52:54:00:5e:6f:f7}
	I0211 03:14:21.049510   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined IP address 192.168.39.206 and MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:14:21.049664   63944 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHPort
	I0211 03:14:21.049825   63944 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHKeyPath
	I0211 03:14:21.050000   63944 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHUsername
	I0211 03:14:21.050119   63944 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/old-k8s-version-244815/id_rsa Username:docker}
	I0211 03:14:21.129192   63944 ssh_runner.go:195] Run: cat /etc/os-release
	I0211 03:14:21.133025   63944 info.go:137] Remote host: Buildroot 2023.02.9
	I0211 03:14:21.133053   63944 filesync.go:126] Scanning /home/jenkins/minikube-integration/20400-12456/.minikube/addons for local assets ...
	I0211 03:14:21.133123   63944 filesync.go:126] Scanning /home/jenkins/minikube-integration/20400-12456/.minikube/files for local assets ...
	I0211 03:14:21.133221   63944 filesync.go:149] local asset: /home/jenkins/minikube-integration/20400-12456/.minikube/files/etc/ssl/certs/196452.pem -> 196452.pem in /etc/ssl/certs
	I0211 03:14:21.133351   63944 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0211 03:14:21.141942   63944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/files/etc/ssl/certs/196452.pem --> /etc/ssl/certs/196452.pem (1708 bytes)
	I0211 03:14:21.163834   63944 start.go:296] duration metric: took 117.66445ms for postStartSetup
	I0211 03:14:21.163868   63944 fix.go:56] duration metric: took 20.104369648s for fixHost
	I0211 03:14:21.163886   63944 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHHostname
	I0211 03:14:21.166185   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:14:21.166497   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:6f:f7", ip: ""} in network mk-old-k8s-version-244815: {Iface:virbr1 ExpiryTime:2025-02-11 04:07:58 +0000 UTC Type:0 Mac:52:54:00:5e:6f:f7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:old-k8s-version-244815 Clientid:01:52:54:00:5e:6f:f7}
	I0211 03:14:21.166528   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined IP address 192.168.39.206 and MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:14:21.166653   63944 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHPort
	I0211 03:14:21.166827   63944 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHKeyPath
	I0211 03:14:21.166989   63944 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHKeyPath
	I0211 03:14:21.167121   63944 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHUsername
	I0211 03:14:21.167255   63944 main.go:141] libmachine: Using SSH client type: native
	I0211 03:14:21.167443   63944 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I0211 03:14:21.167454   63944 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0211 03:14:21.271826   63944 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739243661.245462243
	
	I0211 03:14:21.271851   63944 fix.go:216] guest clock: 1739243661.245462243
	I0211 03:14:21.271858   63944 fix.go:229] Guest: 2025-02-11 03:14:21.245462243 +0000 UTC Remote: 2025-02-11 03:14:21.16387145 +0000 UTC m=+20.247301142 (delta=81.590793ms)
	I0211 03:14:21.271897   63944 fix.go:200] guest clock delta is within tolerance: 81.590793ms
	I0211 03:14:21.271904   63944 start.go:83] releasing machines lock for "old-k8s-version-244815", held for 20.212418032s
	I0211 03:14:21.271934   63944 main.go:141] libmachine: (old-k8s-version-244815) Calling .DriverName
	I0211 03:14:21.272204   63944 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetIP
	I0211 03:14:21.274852   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:14:21.275231   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:6f:f7", ip: ""} in network mk-old-k8s-version-244815: {Iface:virbr1 ExpiryTime:2025-02-11 04:07:58 +0000 UTC Type:0 Mac:52:54:00:5e:6f:f7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:old-k8s-version-244815 Clientid:01:52:54:00:5e:6f:f7}
	I0211 03:14:21.275255   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined IP address 192.168.39.206 and MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:14:21.275460   63944 main.go:141] libmachine: (old-k8s-version-244815) Calling .DriverName
	I0211 03:14:21.275914   63944 main.go:141] libmachine: (old-k8s-version-244815) Calling .DriverName
	I0211 03:14:21.276104   63944 main.go:141] libmachine: (old-k8s-version-244815) Calling .DriverName
	I0211 03:14:21.276198   63944 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0211 03:14:21.276248   63944 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHHostname
	I0211 03:14:21.276307   63944 ssh_runner.go:195] Run: cat /version.json
	I0211 03:14:21.276331   63944 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHHostname
	I0211 03:14:21.278925   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:14:21.279129   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:14:21.279275   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:6f:f7", ip: ""} in network mk-old-k8s-version-244815: {Iface:virbr1 ExpiryTime:2025-02-11 04:07:58 +0000 UTC Type:0 Mac:52:54:00:5e:6f:f7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:old-k8s-version-244815 Clientid:01:52:54:00:5e:6f:f7}
	I0211 03:14:21.279303   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined IP address 192.168.39.206 and MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:14:21.279498   63944 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHPort
	I0211 03:14:21.279561   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:6f:f7", ip: ""} in network mk-old-k8s-version-244815: {Iface:virbr1 ExpiryTime:2025-02-11 04:07:58 +0000 UTC Type:0 Mac:52:54:00:5e:6f:f7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:old-k8s-version-244815 Clientid:01:52:54:00:5e:6f:f7}
	I0211 03:14:21.279582   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined IP address 192.168.39.206 and MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:14:21.279669   63944 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHKeyPath
	I0211 03:14:21.279750   63944 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHPort
	I0211 03:14:21.279789   63944 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHUsername
	I0211 03:14:21.279923   63944 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/old-k8s-version-244815/id_rsa Username:docker}
	I0211 03:14:21.279970   63944 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHKeyPath
	I0211 03:14:21.280116   63944 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetSSHUsername
	I0211 03:14:21.280291   63944 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/old-k8s-version-244815/id_rsa Username:docker}
	I0211 03:14:21.379351   63944 ssh_runner.go:195] Run: systemctl --version
	I0211 03:14:21.385345   63944 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0211 03:14:21.536070   63944 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0211 03:14:21.542680   63944 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0211 03:14:21.542744   63944 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0211 03:14:21.559057   63944 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0211 03:14:21.559083   63944 start.go:495] detecting cgroup driver to use...
	I0211 03:14:21.559155   63944 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0211 03:14:21.574488   63944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0211 03:14:21.589816   63944 docker.go:217] disabling cri-docker service (if available) ...
	I0211 03:14:21.589883   63944 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0211 03:14:21.605179   63944 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0211 03:14:21.620169   63944 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0211 03:14:21.749923   63944 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0211 03:14:21.898700   63944 docker.go:233] disabling docker service ...
	I0211 03:14:21.898789   63944 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0211 03:14:21.920206   63944 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0211 03:14:21.939023   63944 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0211 03:14:22.120375   63944 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0211 03:14:22.253486   63944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0211 03:14:22.269949   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0211 03:14:22.292213   63944 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0211 03:14:22.292285   63944 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 03:14:22.305033   63944 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0211 03:14:22.305102   63944 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 03:14:22.319647   63944 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 03:14:22.332965   63944 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 03:14:22.346258   63944 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0211 03:14:22.359728   63944 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0211 03:14:22.371442   63944 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0211 03:14:22.371512   63944 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0211 03:14:22.388055   63944 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0211 03:14:22.400412   63944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0211 03:14:22.564250   63944 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0211 03:14:22.689388   63944 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0211 03:14:22.689473   63944 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0211 03:14:22.695110   63944 start.go:563] Will wait 60s for crictl version
	I0211 03:14:22.695184   63944 ssh_runner.go:195] Run: which crictl
	I0211 03:14:22.700677   63944 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0211 03:14:22.747336   63944 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0211 03:14:22.747427   63944 ssh_runner.go:195] Run: crio --version
	I0211 03:14:22.784271   63944 ssh_runner.go:195] Run: crio --version
	I0211 03:14:22.820326   63944 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0211 03:14:22.821510   63944 main.go:141] libmachine: (old-k8s-version-244815) Calling .GetIP
	I0211 03:14:22.827262   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:14:22.827752   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5e:6f:f7", ip: ""} in network mk-old-k8s-version-244815: {Iface:virbr1 ExpiryTime:2025-02-11 04:07:58 +0000 UTC Type:0 Mac:52:54:00:5e:6f:f7 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:old-k8s-version-244815 Clientid:01:52:54:00:5e:6f:f7}
	I0211 03:14:22.827777   63944 main.go:141] libmachine: (old-k8s-version-244815) DBG | domain old-k8s-version-244815 has defined IP address 192.168.39.206 and MAC address 52:54:00:5e:6f:f7 in network mk-old-k8s-version-244815
	I0211 03:14:22.828017   63944 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0211 03:14:22.833213   63944 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0211 03:14:22.849336   63944 kubeadm.go:883] updating cluster {Name:old-k8s-version-244815 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-244815 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0211 03:14:22.849470   63944 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0211 03:14:22.849529   63944 ssh_runner.go:195] Run: sudo crictl images --output json
	I0211 03:14:22.906748   63944 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0211 03:14:22.906823   63944 ssh_runner.go:195] Run: which lz4
	I0211 03:14:22.911308   63944 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0211 03:14:22.915804   63944 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0211 03:14:22.915853   63944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0211 03:14:24.389555   63944 crio.go:462] duration metric: took 1.478280191s to copy over tarball
	I0211 03:14:24.389636   63944 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0211 03:14:27.440136   63944 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.05046131s)
	I0211 03:14:27.440171   63944 crio.go:469] duration metric: took 3.050584237s to extract the tarball
	I0211 03:14:27.440179   63944 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0211 03:14:27.490525   63944 ssh_runner.go:195] Run: sudo crictl images --output json
	I0211 03:14:27.525157   63944 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0211 03:14:27.525180   63944 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0211 03:14:27.525247   63944 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0211 03:14:27.525258   63944 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0211 03:14:27.525268   63944 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0211 03:14:27.525298   63944 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0211 03:14:27.525320   63944 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0211 03:14:27.525348   63944 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0211 03:14:27.525463   63944 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0211 03:14:27.525479   63944 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0211 03:14:27.527261   63944 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0211 03:14:27.527340   63944 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0211 03:14:27.527469   63944 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0211 03:14:27.527596   63944 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0211 03:14:27.527699   63944 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0211 03:14:27.527816   63944 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0211 03:14:27.527889   63944 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0211 03:14:27.527919   63944 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0211 03:14:27.663288   63944 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0211 03:14:27.666085   63944 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0211 03:14:27.668171   63944 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0211 03:14:27.672654   63944 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0211 03:14:27.675294   63944 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0211 03:14:27.680090   63944 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0211 03:14:27.688436   63944 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0211 03:14:27.802744   63944 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0211 03:14:27.802795   63944 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0211 03:14:27.802818   63944 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0211 03:14:27.802949   63944 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0211 03:14:27.802999   63944 ssh_runner.go:195] Run: which crictl
	I0211 03:14:27.802845   63944 ssh_runner.go:195] Run: which crictl
	I0211 03:14:27.824423   63944 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0211 03:14:27.824468   63944 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0211 03:14:27.824520   63944 ssh_runner.go:195] Run: which crictl
	I0211 03:14:27.838485   63944 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0211 03:14:27.838531   63944 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0211 03:14:27.838576   63944 ssh_runner.go:195] Run: which crictl
	I0211 03:14:27.841326   63944 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0211 03:14:27.841358   63944 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0211 03:14:27.841370   63944 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0211 03:14:27.841399   63944 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0211 03:14:27.841411   63944 ssh_runner.go:195] Run: which crictl
	I0211 03:14:27.841434   63944 ssh_runner.go:195] Run: which crictl
	I0211 03:14:27.856309   63944 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0211 03:14:27.856362   63944 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0211 03:14:27.856377   63944 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0211 03:14:27.856434   63944 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0211 03:14:27.856456   63944 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0211 03:14:27.856477   63944 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0211 03:14:27.856539   63944 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0211 03:14:27.856568   63944 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0211 03:14:27.856602   63944 ssh_runner.go:195] Run: which crictl
	I0211 03:14:28.002550   63944 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0211 03:14:28.002592   63944 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0211 03:14:28.002606   63944 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0211 03:14:28.002674   63944 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0211 03:14:28.002724   63944 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0211 03:14:28.002789   63944 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0211 03:14:28.002898   63944 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0211 03:14:28.154562   63944 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0211 03:14:28.154562   63944 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0211 03:14:28.154655   63944 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0211 03:14:28.187766   63944 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0211 03:14:28.187817   63944 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0211 03:14:28.187835   63944 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0211 03:14:28.187849   63944 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0211 03:14:28.292350   63944 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0211 03:14:28.292429   63944 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0211 03:14:28.292526   63944 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0211 03:14:28.313691   63944 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0211 03:14:28.327011   63944 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0211 03:14:28.327111   63944 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0211 03:14:28.332108   63944 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0211 03:14:28.363958   63944 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0211 03:14:28.476637   63944 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0211 03:14:28.619611   63944 cache_images.go:92] duration metric: took 1.094416071s to LoadCachedImages
	W0211 03:14:28.619716   63944 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20400-12456/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0211 03:14:28.619734   63944 kubeadm.go:934] updating node { 192.168.39.206 8443 v1.20.0 crio true true} ...
	I0211 03:14:28.619846   63944 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-244815 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.206
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-244815 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0211 03:14:28.619927   63944 ssh_runner.go:195] Run: crio config
	I0211 03:14:28.663690   63944 cni.go:84] Creating CNI manager for ""
	I0211 03:14:28.663712   63944 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0211 03:14:28.663721   63944 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0211 03:14:28.663740   63944 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.206 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-244815 NodeName:old-k8s-version-244815 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.206"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.206 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0211 03:14:28.663898   63944 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.206
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-244815"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.206
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.206"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0211 03:14:28.663980   63944 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0211 03:14:28.677537   63944 binaries.go:44] Found k8s binaries, skipping transfer
	I0211 03:14:28.677600   63944 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0211 03:14:28.687933   63944 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0211 03:14:28.704670   63944 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0211 03:14:28.722919   63944 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0211 03:14:28.741929   63944 ssh_runner.go:195] Run: grep 192.168.39.206	control-plane.minikube.internal$ /etc/hosts
	I0211 03:14:28.745522   63944 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.206	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0211 03:14:28.757061   63944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0211 03:14:28.871347   63944 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0211 03:14:28.888106   63944 certs.go:68] Setting up /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/old-k8s-version-244815 for IP: 192.168.39.206
	I0211 03:14:28.888126   63944 certs.go:194] generating shared ca certs ...
	I0211 03:14:28.888145   63944 certs.go:226] acquiring lock for ca certs: {Name:mk14e70e4f3b98aff6eac535114852cc1d70eb3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 03:14:28.888320   63944 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20400-12456/.minikube/ca.key
	I0211 03:14:28.888378   63944 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20400-12456/.minikube/proxy-client-ca.key
	I0211 03:14:28.888403   63944 certs.go:256] generating profile certs ...
	I0211 03:14:28.890734   63944 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/old-k8s-version-244815/client.key
	I0211 03:14:28.890822   63944 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/old-k8s-version-244815/apiserver.key.717128d0
	I0211 03:14:28.890865   63944 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/old-k8s-version-244815/proxy-client.key
	I0211 03:14:28.891036   63944 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/19645.pem (1338 bytes)
	W0211 03:14:28.891072   63944 certs.go:480] ignoring /home/jenkins/minikube-integration/20400-12456/.minikube/certs/19645_empty.pem, impossibly tiny 0 bytes
	I0211 03:14:28.891087   63944 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca-key.pem (1675 bytes)
	I0211 03:14:28.891121   63944 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca.pem (1078 bytes)
	I0211 03:14:28.891154   63944 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/cert.pem (1123 bytes)
	I0211 03:14:28.891181   63944 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/key.pem (1679 bytes)
	I0211 03:14:28.891242   63944 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12456/.minikube/files/etc/ssl/certs/196452.pem (1708 bytes)
	I0211 03:14:28.891945   63944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0211 03:14:28.930822   63944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0211 03:14:28.959809   63944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0211 03:14:28.987545   63944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0211 03:14:29.015487   63944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/old-k8s-version-244815/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0211 03:14:29.045943   63944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/old-k8s-version-244815/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0211 03:14:29.082698   63944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/old-k8s-version-244815/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0211 03:14:29.118833   63944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/old-k8s-version-244815/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0211 03:14:29.156283   63944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/certs/19645.pem --> /usr/share/ca-certificates/19645.pem (1338 bytes)
	I0211 03:14:29.184336   63944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/files/etc/ssl/certs/196452.pem --> /usr/share/ca-certificates/196452.pem (1708 bytes)
	I0211 03:14:29.206693   63944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0211 03:14:29.229399   63944 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0211 03:14:29.245717   63944 ssh_runner.go:195] Run: openssl version
	I0211 03:14:29.251328   63944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/196452.pem && ln -fs /usr/share/ca-certificates/196452.pem /etc/ssl/certs/196452.pem"
	I0211 03:14:29.261539   63944 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/196452.pem
	I0211 03:14:29.265678   63944 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 11 02:09 /usr/share/ca-certificates/196452.pem
	I0211 03:14:29.265738   63944 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/196452.pem
	I0211 03:14:29.271358   63944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/196452.pem /etc/ssl/certs/3ec20f2e.0"
	I0211 03:14:29.281916   63944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0211 03:14:29.292586   63944 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0211 03:14:29.297109   63944 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 11 02:02 /usr/share/ca-certificates/minikubeCA.pem
	I0211 03:14:29.297151   63944 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0211 03:14:29.302559   63944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0211 03:14:29.314350   63944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19645.pem && ln -fs /usr/share/ca-certificates/19645.pem /etc/ssl/certs/19645.pem"
	I0211 03:14:29.335943   63944 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19645.pem
	I0211 03:14:29.340861   63944 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 11 02:09 /usr/share/ca-certificates/19645.pem
	I0211 03:14:29.340919   63944 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19645.pem
	I0211 03:14:29.346399   63944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/19645.pem /etc/ssl/certs/51391683.0"
	I0211 03:14:29.357121   63944 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0211 03:14:29.361457   63944 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0211 03:14:29.367119   63944 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0211 03:14:29.372654   63944 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0211 03:14:29.378136   63944 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0211 03:14:29.383779   63944 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0211 03:14:29.391369   63944 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
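The openssl invocations above do two things: "openssl x509 -hash -noout" derives the subject-hash names (3ec20f2e.0, b5213941.0, 51391683.0) used for the symlinks in /etc/ssl/certs, and "-checkend 86400" verifies that each control-plane certificate stays valid for at least another 24 hours. A rough Go equivalent of that 24-hour expiry check, using crypto/x509 instead of shelling out to openssl (this is not minikube's code; the certificate path is just one of the files checked above), could look like:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at certPath is still valid
// d from now, mirroring what `openssl x509 -checkend` tests.
func validFor(certPath string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(certPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}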
	I0211 03:14:29.398937   63944 kubeadm.go:392] StartCluster: {Name:old-k8s-version-244815 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-244815 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0211 03:14:29.399019   63944 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0211 03:14:29.399060   63944 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0211 03:14:29.440982   63944 cri.go:89] found id: ""
	I0211 03:14:29.441051   63944 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0211 03:14:29.451514   63944 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0211 03:14:29.451550   63944 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0211 03:14:29.451603   63944 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0211 03:14:29.460847   63944 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0211 03:14:29.461577   63944 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-244815" does not appear in /home/jenkins/minikube-integration/20400-12456/kubeconfig
	I0211 03:14:29.461964   63944 kubeconfig.go:62] /home/jenkins/minikube-integration/20400-12456/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-244815" cluster setting kubeconfig missing "old-k8s-version-244815" context setting]
	I0211 03:14:29.462518   63944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12456/kubeconfig: {Name:mkd961d61f748b29ba3bb0ad55f8216d88f98444 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 03:14:29.547209   63944 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0211 03:14:29.558466   63944 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.206
	I0211 03:14:29.558504   63944 kubeadm.go:1160] stopping kube-system containers ...
	I0211 03:14:29.558517   63944 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0211 03:14:29.558570   63944 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0211 03:14:29.592518   63944 cri.go:89] found id: ""
	I0211 03:14:29.592590   63944 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0211 03:14:29.608021   63944 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0211 03:14:29.618685   63944 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0211 03:14:29.618710   63944 kubeadm.go:157] found existing configuration files:
	
	I0211 03:14:29.618762   63944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0211 03:14:29.627778   63944 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0211 03:14:29.627844   63944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0211 03:14:29.637683   63944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0211 03:14:29.646246   63944 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0211 03:14:29.646309   63944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0211 03:14:29.655294   63944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0211 03:14:29.664672   63944 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0211 03:14:29.664738   63944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0211 03:14:29.674023   63944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0211 03:14:29.683265   63944 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0211 03:14:29.683325   63944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0211 03:14:29.692909   63944 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0211 03:14:29.702397   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0211 03:14:29.834275   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0211 03:14:30.516102   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0211 03:14:30.791682   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0211 03:14:30.901644   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0211 03:14:31.017147   63944 api_server.go:52] waiting for apiserver process to appear ...
	I0211 03:14:31.017253   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:14:31.517521   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:14:32.017357   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:14:32.517810   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:14:33.018045   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:14:33.518389   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:14:34.017362   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:14:34.517715   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:14:35.017786   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:14:35.517469   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:14:36.018065   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:14:36.518136   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:14:37.017944   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:14:37.517401   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:14:38.018356   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:14:38.517747   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:14:39.018330   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:14:39.518356   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:14:40.018001   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:14:40.518112   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:14:41.017701   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:14:41.518113   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:14:42.017334   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:14:42.518230   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:14:43.018256   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:14:43.517767   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:14:44.017454   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:14:44.518179   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:14:45.018287   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:14:45.517313   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:14:46.017904   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:14:46.517849   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:14:47.018266   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:14:47.517359   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:14:48.017541   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:14:48.517969   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:14:49.018002   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:14:49.517500   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:14:50.017994   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:14:50.518147   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:14:51.017869   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:14:51.517832   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:14:52.017346   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:14:52.517557   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:14:53.017329   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:14:53.518231   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:14:54.017989   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:14:54.517396   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:14:55.017571   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:14:55.517395   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:14:56.018253   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:14:56.517695   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:14:57.018035   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:14:57.518107   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:14:58.017560   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:14:58.518131   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:14:59.018336   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:14:59.518260   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:00.017451   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:00.517510   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:01.018010   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:01.518261   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:02.017459   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:02.517603   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:03.018154   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:03.517391   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:04.017723   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:04.518091   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:05.017381   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:05.517582   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:06.018043   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:06.517766   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:07.017940   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:07.517344   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:08.017473   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:08.517511   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:09.017410   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:09.517359   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:10.018244   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:10.517905   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:11.018193   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:11.517335   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:12.018332   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:12.518322   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:13.017950   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:13.517488   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:14.017641   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:14.517850   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:15.017947   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:15.517372   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:16.018173   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:16.517324   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:17.017326   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:17.517375   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:18.017977   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:18.517732   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:19.017459   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:19.518192   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:20.017839   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:20.517716   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:21.018207   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:21.517381   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:22.017510   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:22.518296   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:23.017349   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:23.518225   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:24.017510   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:24.517497   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:25.018194   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:25.518178   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:26.017285   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:26.517930   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:27.018257   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:27.518332   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:28.018155   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:28.518165   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:29.017327   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:29.518190   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:30.018176   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:30.518007   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:31.017660   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:15:31.017745   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:15:31.054432   63944 cri.go:89] found id: ""
	I0211 03:15:31.054455   63944 logs.go:282] 0 containers: []
	W0211 03:15:31.054470   63944 logs.go:284] No container was found matching "kube-apiserver"
	I0211 03:15:31.054476   63944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:15:31.054541   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:15:31.091665   63944 cri.go:89] found id: ""
	I0211 03:15:31.091693   63944 logs.go:282] 0 containers: []
	W0211 03:15:31.091700   63944 logs.go:284] No container was found matching "etcd"
	I0211 03:15:31.091706   63944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:15:31.091749   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:15:31.125617   63944 cri.go:89] found id: ""
	I0211 03:15:31.125642   63944 logs.go:282] 0 containers: []
	W0211 03:15:31.125651   63944 logs.go:284] No container was found matching "coredns"
	I0211 03:15:31.125659   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:15:31.125723   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:15:31.161077   63944 cri.go:89] found id: ""
	I0211 03:15:31.161111   63944 logs.go:282] 0 containers: []
	W0211 03:15:31.161121   63944 logs.go:284] No container was found matching "kube-scheduler"
	I0211 03:15:31.161127   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:15:31.161198   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:15:31.194602   63944 cri.go:89] found id: ""
	I0211 03:15:31.194633   63944 logs.go:282] 0 containers: []
	W0211 03:15:31.194644   63944 logs.go:284] No container was found matching "kube-proxy"
	I0211 03:15:31.194652   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:15:31.194711   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:15:31.225988   63944 cri.go:89] found id: ""
	I0211 03:15:31.226012   63944 logs.go:282] 0 containers: []
	W0211 03:15:31.226022   63944 logs.go:284] No container was found matching "kube-controller-manager"
	I0211 03:15:31.226029   63944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:15:31.226091   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:15:31.265223   63944 cri.go:89] found id: ""
	I0211 03:15:31.265252   63944 logs.go:282] 0 containers: []
	W0211 03:15:31.265260   63944 logs.go:284] No container was found matching "kindnet"
	I0211 03:15:31.265266   63944 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0211 03:15:31.265313   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0211 03:15:31.297559   63944 cri.go:89] found id: ""
	I0211 03:15:31.297585   63944 logs.go:282] 0 containers: []
	W0211 03:15:31.297592   63944 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0211 03:15:31.297600   63944 logs.go:123] Gathering logs for kubelet ...
	I0211 03:15:31.297611   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:15:31.344012   63944 logs.go:123] Gathering logs for dmesg ...
	I0211 03:15:31.344042   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:15:31.356704   63944 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:15:31.356727   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:15:31.470255   63944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:15:31.470283   63944 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:15:31.470299   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:15:31.536765   63944 logs.go:123] Gathering logs for container status ...
	I0211 03:15:31.536805   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:15:34.077693   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:34.090364   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:15:34.090443   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:15:34.124713   63944 cri.go:89] found id: ""
	I0211 03:15:34.124746   63944 logs.go:282] 0 containers: []
	W0211 03:15:34.124758   63944 logs.go:284] No container was found matching "kube-apiserver"
	I0211 03:15:34.124765   63944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:15:34.124824   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:15:34.160390   63944 cri.go:89] found id: ""
	I0211 03:15:34.160440   63944 logs.go:282] 0 containers: []
	W0211 03:15:34.160456   63944 logs.go:284] No container was found matching "etcd"
	I0211 03:15:34.160464   63944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:15:34.160523   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:15:34.195486   63944 cri.go:89] found id: ""
	I0211 03:15:34.195523   63944 logs.go:282] 0 containers: []
	W0211 03:15:34.195534   63944 logs.go:284] No container was found matching "coredns"
	I0211 03:15:34.195541   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:15:34.195599   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:15:34.225724   63944 cri.go:89] found id: ""
	I0211 03:15:34.225748   63944 logs.go:282] 0 containers: []
	W0211 03:15:34.225757   63944 logs.go:284] No container was found matching "kube-scheduler"
	I0211 03:15:34.225764   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:15:34.225820   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:15:34.258833   63944 cri.go:89] found id: ""
	I0211 03:15:34.258863   63944 logs.go:282] 0 containers: []
	W0211 03:15:34.258886   63944 logs.go:284] No container was found matching "kube-proxy"
	I0211 03:15:34.258895   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:15:34.258972   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:15:34.291017   63944 cri.go:89] found id: ""
	I0211 03:15:34.291046   63944 logs.go:282] 0 containers: []
	W0211 03:15:34.291058   63944 logs.go:284] No container was found matching "kube-controller-manager"
	I0211 03:15:34.291066   63944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:15:34.291111   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:15:34.323024   63944 cri.go:89] found id: ""
	I0211 03:15:34.323045   63944 logs.go:282] 0 containers: []
	W0211 03:15:34.323052   63944 logs.go:284] No container was found matching "kindnet"
	I0211 03:15:34.323058   63944 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0211 03:15:34.323100   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0211 03:15:34.353970   63944 cri.go:89] found id: ""
	I0211 03:15:34.353998   63944 logs.go:282] 0 containers: []
	W0211 03:15:34.354006   63944 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0211 03:15:34.354015   63944 logs.go:123] Gathering logs for container status ...
	I0211 03:15:34.354026   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:15:34.390792   63944 logs.go:123] Gathering logs for kubelet ...
	I0211 03:15:34.390821   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:15:34.440639   63944 logs.go:123] Gathering logs for dmesg ...
	I0211 03:15:34.440671   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:15:34.453948   63944 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:15:34.453973   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:15:34.523274   63944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:15:34.523299   63944 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:15:34.523323   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:15:37.103721   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:37.119166   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:15:37.119236   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:15:37.157362   63944 cri.go:89] found id: ""
	I0211 03:15:37.157391   63944 logs.go:282] 0 containers: []
	W0211 03:15:37.157399   63944 logs.go:284] No container was found matching "kube-apiserver"
	I0211 03:15:37.157412   63944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:15:37.157471   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:15:37.198072   63944 cri.go:89] found id: ""
	I0211 03:15:37.198100   63944 logs.go:282] 0 containers: []
	W0211 03:15:37.198108   63944 logs.go:284] No container was found matching "etcd"
	I0211 03:15:37.198124   63944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:15:37.198176   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:15:37.238217   63944 cri.go:89] found id: ""
	I0211 03:15:37.238240   63944 logs.go:282] 0 containers: []
	W0211 03:15:37.238247   63944 logs.go:284] No container was found matching "coredns"
	I0211 03:15:37.238253   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:15:37.238298   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:15:37.270254   63944 cri.go:89] found id: ""
	I0211 03:15:37.270281   63944 logs.go:282] 0 containers: []
	W0211 03:15:37.270292   63944 logs.go:284] No container was found matching "kube-scheduler"
	I0211 03:15:37.270300   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:15:37.270353   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:15:37.302133   63944 cri.go:89] found id: ""
	I0211 03:15:37.302163   63944 logs.go:282] 0 containers: []
	W0211 03:15:37.302172   63944 logs.go:284] No container was found matching "kube-proxy"
	I0211 03:15:37.302178   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:15:37.302253   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:15:37.335509   63944 cri.go:89] found id: ""
	I0211 03:15:37.335541   63944 logs.go:282] 0 containers: []
	W0211 03:15:37.335552   63944 logs.go:284] No container was found matching "kube-controller-manager"
	I0211 03:15:37.335560   63944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:15:37.335615   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:15:37.371908   63944 cri.go:89] found id: ""
	I0211 03:15:37.371933   63944 logs.go:282] 0 containers: []
	W0211 03:15:37.371965   63944 logs.go:284] No container was found matching "kindnet"
	I0211 03:15:37.371976   63944 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0211 03:15:37.372030   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0211 03:15:37.402664   63944 cri.go:89] found id: ""
	I0211 03:15:37.402696   63944 logs.go:282] 0 containers: []
	W0211 03:15:37.402707   63944 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0211 03:15:37.402718   63944 logs.go:123] Gathering logs for kubelet ...
	I0211 03:15:37.402731   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:15:37.451789   63944 logs.go:123] Gathering logs for dmesg ...
	I0211 03:15:37.451821   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:15:37.464598   63944 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:15:37.464631   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:15:37.529644   63944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:15:37.529665   63944 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:15:37.529677   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:15:37.604040   63944 logs.go:123] Gathering logs for container status ...
	I0211 03:15:37.604081   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:15:40.148669   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:40.160731   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:15:40.160799   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:15:40.191713   63944 cri.go:89] found id: ""
	I0211 03:15:40.191739   63944 logs.go:282] 0 containers: []
	W0211 03:15:40.191748   63944 logs.go:284] No container was found matching "kube-apiserver"
	I0211 03:15:40.191755   63944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:15:40.191811   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:15:40.225217   63944 cri.go:89] found id: ""
	I0211 03:15:40.225245   63944 logs.go:282] 0 containers: []
	W0211 03:15:40.225256   63944 logs.go:284] No container was found matching "etcd"
	I0211 03:15:40.225265   63944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:15:40.225325   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:15:40.255465   63944 cri.go:89] found id: ""
	I0211 03:15:40.255499   63944 logs.go:282] 0 containers: []
	W0211 03:15:40.255510   63944 logs.go:284] No container was found matching "coredns"
	I0211 03:15:40.255517   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:15:40.255575   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:15:40.291341   63944 cri.go:89] found id: ""
	I0211 03:15:40.291367   63944 logs.go:282] 0 containers: []
	W0211 03:15:40.291376   63944 logs.go:284] No container was found matching "kube-scheduler"
	I0211 03:15:40.291384   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:15:40.291453   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:15:40.331341   63944 cri.go:89] found id: ""
	I0211 03:15:40.331372   63944 logs.go:282] 0 containers: []
	W0211 03:15:40.331379   63944 logs.go:284] No container was found matching "kube-proxy"
	I0211 03:15:40.331385   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:15:40.331448   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:15:40.370155   63944 cri.go:89] found id: ""
	I0211 03:15:40.370184   63944 logs.go:282] 0 containers: []
	W0211 03:15:40.370195   63944 logs.go:284] No container was found matching "kube-controller-manager"
	I0211 03:15:40.370203   63944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:15:40.370267   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:15:40.410501   63944 cri.go:89] found id: ""
	I0211 03:15:40.410531   63944 logs.go:282] 0 containers: []
	W0211 03:15:40.410541   63944 logs.go:284] No container was found matching "kindnet"
	I0211 03:15:40.410550   63944 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0211 03:15:40.410607   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0211 03:15:40.440996   63944 cri.go:89] found id: ""
	I0211 03:15:40.441022   63944 logs.go:282] 0 containers: []
	W0211 03:15:40.441033   63944 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0211 03:15:40.441042   63944 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:15:40.441056   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:15:40.513145   63944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:15:40.513173   63944 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:15:40.513191   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:15:40.599689   63944 logs.go:123] Gathering logs for container status ...
	I0211 03:15:40.599720   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:15:40.637687   63944 logs.go:123] Gathering logs for kubelet ...
	I0211 03:15:40.637710   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:15:40.691259   63944 logs.go:123] Gathering logs for dmesg ...
	I0211 03:15:40.691302   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:15:43.205741   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:43.219949   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:15:43.220031   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:15:43.257062   63944 cri.go:89] found id: ""
	I0211 03:15:43.257098   63944 logs.go:282] 0 containers: []
	W0211 03:15:43.257109   63944 logs.go:284] No container was found matching "kube-apiserver"
	I0211 03:15:43.257117   63944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:15:43.257184   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:15:43.289414   63944 cri.go:89] found id: ""
	I0211 03:15:43.289449   63944 logs.go:282] 0 containers: []
	W0211 03:15:43.289458   63944 logs.go:284] No container was found matching "etcd"
	I0211 03:15:43.289464   63944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:15:43.289520   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:15:43.323217   63944 cri.go:89] found id: ""
	I0211 03:15:43.323247   63944 logs.go:282] 0 containers: []
	W0211 03:15:43.323258   63944 logs.go:284] No container was found matching "coredns"
	I0211 03:15:43.323265   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:15:43.323325   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:15:43.354739   63944 cri.go:89] found id: ""
	I0211 03:15:43.354773   63944 logs.go:282] 0 containers: []
	W0211 03:15:43.354784   63944 logs.go:284] No container was found matching "kube-scheduler"
	I0211 03:15:43.354792   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:15:43.354847   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:15:43.393346   63944 cri.go:89] found id: ""
	I0211 03:15:43.393379   63944 logs.go:282] 0 containers: []
	W0211 03:15:43.393391   63944 logs.go:284] No container was found matching "kube-proxy"
	I0211 03:15:43.393399   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:15:43.393468   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:15:43.425896   63944 cri.go:89] found id: ""
	I0211 03:15:43.425930   63944 logs.go:282] 0 containers: []
	W0211 03:15:43.425943   63944 logs.go:284] No container was found matching "kube-controller-manager"
	I0211 03:15:43.425951   63944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:15:43.426017   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:15:43.456463   63944 cri.go:89] found id: ""
	I0211 03:15:43.456492   63944 logs.go:282] 0 containers: []
	W0211 03:15:43.456503   63944 logs.go:284] No container was found matching "kindnet"
	I0211 03:15:43.456511   63944 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0211 03:15:43.456571   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0211 03:15:43.488869   63944 cri.go:89] found id: ""
	I0211 03:15:43.488894   63944 logs.go:282] 0 containers: []
	W0211 03:15:43.488905   63944 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0211 03:15:43.488920   63944 logs.go:123] Gathering logs for dmesg ...
	I0211 03:15:43.488933   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:15:43.502031   63944 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:15:43.502062   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:15:43.568948   63944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:15:43.568971   63944 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:15:43.568987   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:15:43.646427   63944 logs.go:123] Gathering logs for container status ...
	I0211 03:15:43.646463   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:15:43.682333   63944 logs.go:123] Gathering logs for kubelet ...
	I0211 03:15:43.682361   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:15:46.235818   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:46.248405   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:15:46.248463   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:15:46.280426   63944 cri.go:89] found id: ""
	I0211 03:15:46.280450   63944 logs.go:282] 0 containers: []
	W0211 03:15:46.280458   63944 logs.go:284] No container was found matching "kube-apiserver"
	I0211 03:15:46.280464   63944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:15:46.280520   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:15:46.314833   63944 cri.go:89] found id: ""
	I0211 03:15:46.314863   63944 logs.go:282] 0 containers: []
	W0211 03:15:46.314889   63944 logs.go:284] No container was found matching "etcd"
	I0211 03:15:46.314897   63944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:15:46.314958   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:15:46.351733   63944 cri.go:89] found id: ""
	I0211 03:15:46.351758   63944 logs.go:282] 0 containers: []
	W0211 03:15:46.351766   63944 logs.go:284] No container was found matching "coredns"
	I0211 03:15:46.351771   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:15:46.351817   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:15:46.386578   63944 cri.go:89] found id: ""
	I0211 03:15:46.386602   63944 logs.go:282] 0 containers: []
	W0211 03:15:46.386609   63944 logs.go:284] No container was found matching "kube-scheduler"
	I0211 03:15:46.386615   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:15:46.386664   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:15:46.422739   63944 cri.go:89] found id: ""
	I0211 03:15:46.422766   63944 logs.go:282] 0 containers: []
	W0211 03:15:46.422778   63944 logs.go:284] No container was found matching "kube-proxy"
	I0211 03:15:46.422786   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:15:46.422851   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:15:46.455475   63944 cri.go:89] found id: ""
	I0211 03:15:46.455506   63944 logs.go:282] 0 containers: []
	W0211 03:15:46.455514   63944 logs.go:284] No container was found matching "kube-controller-manager"
	I0211 03:15:46.455524   63944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:15:46.455573   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:15:46.490474   63944 cri.go:89] found id: ""
	I0211 03:15:46.490519   63944 logs.go:282] 0 containers: []
	W0211 03:15:46.490531   63944 logs.go:284] No container was found matching "kindnet"
	I0211 03:15:46.490542   63944 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0211 03:15:46.490611   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0211 03:15:46.525403   63944 cri.go:89] found id: ""
	I0211 03:15:46.525437   63944 logs.go:282] 0 containers: []
	W0211 03:15:46.525448   63944 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0211 03:15:46.525459   63944 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:15:46.525472   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:15:46.589185   63944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:15:46.589212   63944 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:15:46.589232   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:15:46.663652   63944 logs.go:123] Gathering logs for container status ...
	I0211 03:15:46.663685   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:15:46.702504   63944 logs.go:123] Gathering logs for kubelet ...
	I0211 03:15:46.702542   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:15:46.754182   63944 logs.go:123] Gathering logs for dmesg ...
	I0211 03:15:46.754216   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:15:49.268982   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:49.281091   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:15:49.281170   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:15:49.313350   63944 cri.go:89] found id: ""
	I0211 03:15:49.313379   63944 logs.go:282] 0 containers: []
	W0211 03:15:49.313389   63944 logs.go:284] No container was found matching "kube-apiserver"
	I0211 03:15:49.313397   63944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:15:49.313459   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:15:49.347672   63944 cri.go:89] found id: ""
	I0211 03:15:49.347694   63944 logs.go:282] 0 containers: []
	W0211 03:15:49.347702   63944 logs.go:284] No container was found matching "etcd"
	I0211 03:15:49.347707   63944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:15:49.347752   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:15:49.381673   63944 cri.go:89] found id: ""
	I0211 03:15:49.381705   63944 logs.go:282] 0 containers: []
	W0211 03:15:49.381714   63944 logs.go:284] No container was found matching "coredns"
	I0211 03:15:49.381721   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:15:49.381769   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:15:49.413500   63944 cri.go:89] found id: ""
	I0211 03:15:49.413534   63944 logs.go:282] 0 containers: []
	W0211 03:15:49.413545   63944 logs.go:284] No container was found matching "kube-scheduler"
	I0211 03:15:49.413553   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:15:49.413621   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:15:49.444797   63944 cri.go:89] found id: ""
	I0211 03:15:49.444827   63944 logs.go:282] 0 containers: []
	W0211 03:15:49.444835   63944 logs.go:284] No container was found matching "kube-proxy"
	I0211 03:15:49.444841   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:15:49.444891   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:15:49.474556   63944 cri.go:89] found id: ""
	I0211 03:15:49.474582   63944 logs.go:282] 0 containers: []
	W0211 03:15:49.474593   63944 logs.go:284] No container was found matching "kube-controller-manager"
	I0211 03:15:49.474600   63944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:15:49.474661   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:15:49.505365   63944 cri.go:89] found id: ""
	I0211 03:15:49.505397   63944 logs.go:282] 0 containers: []
	W0211 03:15:49.505405   63944 logs.go:284] No container was found matching "kindnet"
	I0211 03:15:49.505412   63944 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0211 03:15:49.505463   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0211 03:15:49.536570   63944 cri.go:89] found id: ""
	I0211 03:15:49.536602   63944 logs.go:282] 0 containers: []
	W0211 03:15:49.536614   63944 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0211 03:15:49.536625   63944 logs.go:123] Gathering logs for kubelet ...
	I0211 03:15:49.536637   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:15:49.587922   63944 logs.go:123] Gathering logs for dmesg ...
	I0211 03:15:49.587948   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:15:49.600656   63944 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:15:49.600683   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:15:49.668665   63944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:15:49.668694   63944 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:15:49.668709   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:15:49.746247   63944 logs.go:123] Gathering logs for container status ...
	I0211 03:15:49.746284   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:15:52.283882   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:52.296678   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:15:52.296743   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:15:52.335876   63944 cri.go:89] found id: ""
	I0211 03:15:52.335902   63944 logs.go:282] 0 containers: []
	W0211 03:15:52.335913   63944 logs.go:284] No container was found matching "kube-apiserver"
	I0211 03:15:52.335921   63944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:15:52.335984   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:15:52.368392   63944 cri.go:89] found id: ""
	I0211 03:15:52.368435   63944 logs.go:282] 0 containers: []
	W0211 03:15:52.368446   63944 logs.go:284] No container was found matching "etcd"
	I0211 03:15:52.368454   63944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:15:52.368508   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:15:52.404974   63944 cri.go:89] found id: ""
	I0211 03:15:52.405006   63944 logs.go:282] 0 containers: []
	W0211 03:15:52.405017   63944 logs.go:284] No container was found matching "coredns"
	I0211 03:15:52.405025   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:15:52.405082   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:15:52.440282   63944 cri.go:89] found id: ""
	I0211 03:15:52.440310   63944 logs.go:282] 0 containers: []
	W0211 03:15:52.440321   63944 logs.go:284] No container was found matching "kube-scheduler"
	I0211 03:15:52.440327   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:15:52.440384   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:15:52.472711   63944 cri.go:89] found id: ""
	I0211 03:15:52.472737   63944 logs.go:282] 0 containers: []
	W0211 03:15:52.472745   63944 logs.go:284] No container was found matching "kube-proxy"
	I0211 03:15:52.472750   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:15:52.472796   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:15:52.510747   63944 cri.go:89] found id: ""
	I0211 03:15:52.510782   63944 logs.go:282] 0 containers: []
	W0211 03:15:52.510794   63944 logs.go:284] No container was found matching "kube-controller-manager"
	I0211 03:15:52.510803   63944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:15:52.510864   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:15:52.549352   63944 cri.go:89] found id: ""
	I0211 03:15:52.549380   63944 logs.go:282] 0 containers: []
	W0211 03:15:52.549391   63944 logs.go:284] No container was found matching "kindnet"
	I0211 03:15:52.549397   63944 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0211 03:15:52.549461   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0211 03:15:52.582652   63944 cri.go:89] found id: ""
	I0211 03:15:52.582683   63944 logs.go:282] 0 containers: []
	W0211 03:15:52.582695   63944 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0211 03:15:52.582706   63944 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:15:52.582719   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:15:52.666540   63944 logs.go:123] Gathering logs for container status ...
	I0211 03:15:52.666577   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:15:52.704970   63944 logs.go:123] Gathering logs for kubelet ...
	I0211 03:15:52.705003   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:15:52.755263   63944 logs.go:123] Gathering logs for dmesg ...
	I0211 03:15:52.755297   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:15:52.769096   63944 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:15:52.769144   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:15:52.834270   63944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:15:55.335003   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:55.347834   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:15:55.347907   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:15:55.385004   63944 cri.go:89] found id: ""
	I0211 03:15:55.385033   63944 logs.go:282] 0 containers: []
	W0211 03:15:55.385044   63944 logs.go:284] No container was found matching "kube-apiserver"
	I0211 03:15:55.385052   63944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:15:55.385111   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:15:55.420845   63944 cri.go:89] found id: ""
	I0211 03:15:55.420881   63944 logs.go:282] 0 containers: []
	W0211 03:15:55.420892   63944 logs.go:284] No container was found matching "etcd"
	I0211 03:15:55.420899   63944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:15:55.420965   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:15:55.452186   63944 cri.go:89] found id: ""
	I0211 03:15:55.452214   63944 logs.go:282] 0 containers: []
	W0211 03:15:55.452226   63944 logs.go:284] No container was found matching "coredns"
	I0211 03:15:55.452233   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:15:55.452292   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:15:55.485579   63944 cri.go:89] found id: ""
	I0211 03:15:55.485605   63944 logs.go:282] 0 containers: []
	W0211 03:15:55.485613   63944 logs.go:284] No container was found matching "kube-scheduler"
	I0211 03:15:55.485619   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:15:55.485675   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:15:55.517798   63944 cri.go:89] found id: ""
	I0211 03:15:55.517820   63944 logs.go:282] 0 containers: []
	W0211 03:15:55.517827   63944 logs.go:284] No container was found matching "kube-proxy"
	I0211 03:15:55.517832   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:15:55.517877   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:15:55.550589   63944 cri.go:89] found id: ""
	I0211 03:15:55.550615   63944 logs.go:282] 0 containers: []
	W0211 03:15:55.550622   63944 logs.go:284] No container was found matching "kube-controller-manager"
	I0211 03:15:55.550628   63944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:15:55.550671   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:15:55.580858   63944 cri.go:89] found id: ""
	I0211 03:15:55.580885   63944 logs.go:282] 0 containers: []
	W0211 03:15:55.580891   63944 logs.go:284] No container was found matching "kindnet"
	I0211 03:15:55.580898   63944 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0211 03:15:55.580955   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0211 03:15:55.611231   63944 cri.go:89] found id: ""
	I0211 03:15:55.611255   63944 logs.go:282] 0 containers: []
	W0211 03:15:55.611263   63944 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0211 03:15:55.611270   63944 logs.go:123] Gathering logs for kubelet ...
	I0211 03:15:55.611281   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:15:55.663299   63944 logs.go:123] Gathering logs for dmesg ...
	I0211 03:15:55.663331   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:15:55.676223   63944 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:15:55.676252   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:15:55.742727   63944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:15:55.742750   63944 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:15:55.742761   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:15:55.821106   63944 logs.go:123] Gathering logs for container status ...
	I0211 03:15:55.821139   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:15:58.360152   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:15:58.372466   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:15:58.372526   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:15:58.403767   63944 cri.go:89] found id: ""
	I0211 03:15:58.403798   63944 logs.go:282] 0 containers: []
	W0211 03:15:58.403810   63944 logs.go:284] No container was found matching "kube-apiserver"
	I0211 03:15:58.403817   63944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:15:58.403882   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:15:58.438986   63944 cri.go:89] found id: ""
	I0211 03:15:58.439007   63944 logs.go:282] 0 containers: []
	W0211 03:15:58.439014   63944 logs.go:284] No container was found matching "etcd"
	I0211 03:15:58.439020   63944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:15:58.439069   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:15:58.472262   63944 cri.go:89] found id: ""
	I0211 03:15:58.472286   63944 logs.go:282] 0 containers: []
	W0211 03:15:58.472294   63944 logs.go:284] No container was found matching "coredns"
	I0211 03:15:58.472301   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:15:58.472361   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:15:58.503967   63944 cri.go:89] found id: ""
	I0211 03:15:58.504002   63944 logs.go:282] 0 containers: []
	W0211 03:15:58.504011   63944 logs.go:284] No container was found matching "kube-scheduler"
	I0211 03:15:58.504016   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:15:58.504063   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:15:58.538055   63944 cri.go:89] found id: ""
	I0211 03:15:58.538082   63944 logs.go:282] 0 containers: []
	W0211 03:15:58.538091   63944 logs.go:284] No container was found matching "kube-proxy"
	I0211 03:15:58.538097   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:15:58.538157   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:15:58.573267   63944 cri.go:89] found id: ""
	I0211 03:15:58.573298   63944 logs.go:282] 0 containers: []
	W0211 03:15:58.573308   63944 logs.go:284] No container was found matching "kube-controller-manager"
	I0211 03:15:58.573316   63944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:15:58.573381   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:15:58.606161   63944 cri.go:89] found id: ""
	I0211 03:15:58.606199   63944 logs.go:282] 0 containers: []
	W0211 03:15:58.606211   63944 logs.go:284] No container was found matching "kindnet"
	I0211 03:15:58.606219   63944 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0211 03:15:58.606272   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0211 03:15:58.640057   63944 cri.go:89] found id: ""
	I0211 03:15:58.640081   63944 logs.go:282] 0 containers: []
	W0211 03:15:58.640091   63944 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0211 03:15:58.640102   63944 logs.go:123] Gathering logs for dmesg ...
	I0211 03:15:58.640115   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:15:58.653357   63944 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:15:58.653381   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:15:58.717872   63944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:15:58.717900   63944 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:15:58.717912   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:15:58.793904   63944 logs.go:123] Gathering logs for container status ...
	I0211 03:15:58.793951   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:15:58.838062   63944 logs.go:123] Gathering logs for kubelet ...
	I0211 03:15:58.838090   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:16:01.389008   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:16:01.401690   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:16:01.401759   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:16:01.435327   63944 cri.go:89] found id: ""
	I0211 03:16:01.435352   63944 logs.go:282] 0 containers: []
	W0211 03:16:01.435360   63944 logs.go:284] No container was found matching "kube-apiserver"
	I0211 03:16:01.435366   63944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:16:01.435411   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:16:01.469142   63944 cri.go:89] found id: ""
	I0211 03:16:01.469164   63944 logs.go:282] 0 containers: []
	W0211 03:16:01.469172   63944 logs.go:284] No container was found matching "etcd"
	I0211 03:16:01.469178   63944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:16:01.469226   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:16:01.501413   63944 cri.go:89] found id: ""
	I0211 03:16:01.501454   63944 logs.go:282] 0 containers: []
	W0211 03:16:01.501466   63944 logs.go:284] No container was found matching "coredns"
	I0211 03:16:01.501477   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:16:01.501547   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:16:01.533395   63944 cri.go:89] found id: ""
	I0211 03:16:01.533428   63944 logs.go:282] 0 containers: []
	W0211 03:16:01.533441   63944 logs.go:284] No container was found matching "kube-scheduler"
	I0211 03:16:01.533448   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:16:01.533503   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:16:01.572251   63944 cri.go:89] found id: ""
	I0211 03:16:01.572287   63944 logs.go:282] 0 containers: []
	W0211 03:16:01.572298   63944 logs.go:284] No container was found matching "kube-proxy"
	I0211 03:16:01.572306   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:16:01.572368   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:16:01.605088   63944 cri.go:89] found id: ""
	I0211 03:16:01.605111   63944 logs.go:282] 0 containers: []
	W0211 03:16:01.605118   63944 logs.go:284] No container was found matching "kube-controller-manager"
	I0211 03:16:01.605124   63944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:16:01.605168   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:16:01.638279   63944 cri.go:89] found id: ""
	I0211 03:16:01.638302   63944 logs.go:282] 0 containers: []
	W0211 03:16:01.638308   63944 logs.go:284] No container was found matching "kindnet"
	I0211 03:16:01.638314   63944 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0211 03:16:01.638359   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0211 03:16:01.670517   63944 cri.go:89] found id: ""
	I0211 03:16:01.670543   63944 logs.go:282] 0 containers: []
	W0211 03:16:01.670550   63944 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0211 03:16:01.670558   63944 logs.go:123] Gathering logs for kubelet ...
	I0211 03:16:01.670568   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:16:01.720554   63944 logs.go:123] Gathering logs for dmesg ...
	I0211 03:16:01.720586   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:16:01.733737   63944 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:16:01.733763   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:16:01.800604   63944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:16:01.800631   63944 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:16:01.800645   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:16:01.878057   63944 logs.go:123] Gathering logs for container status ...
	I0211 03:16:01.878090   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:16:04.416112   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:16:04.428776   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:16:04.428839   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:16:04.461004   63944 cri.go:89] found id: ""
	I0211 03:16:04.461036   63944 logs.go:282] 0 containers: []
	W0211 03:16:04.461047   63944 logs.go:284] No container was found matching "kube-apiserver"
	I0211 03:16:04.461054   63944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:16:04.461122   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:16:04.499632   63944 cri.go:89] found id: ""
	I0211 03:16:04.499659   63944 logs.go:282] 0 containers: []
	W0211 03:16:04.499669   63944 logs.go:284] No container was found matching "etcd"
	I0211 03:16:04.499676   63944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:16:04.499735   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:16:04.535799   63944 cri.go:89] found id: ""
	I0211 03:16:04.535829   63944 logs.go:282] 0 containers: []
	W0211 03:16:04.535836   63944 logs.go:284] No container was found matching "coredns"
	I0211 03:16:04.535842   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:16:04.535901   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:16:04.574097   63944 cri.go:89] found id: ""
	I0211 03:16:04.574126   63944 logs.go:282] 0 containers: []
	W0211 03:16:04.574137   63944 logs.go:284] No container was found matching "kube-scheduler"
	I0211 03:16:04.574145   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:16:04.574206   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:16:04.607295   63944 cri.go:89] found id: ""
	I0211 03:16:04.607327   63944 logs.go:282] 0 containers: []
	W0211 03:16:04.607339   63944 logs.go:284] No container was found matching "kube-proxy"
	I0211 03:16:04.607355   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:16:04.607430   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:16:04.646695   63944 cri.go:89] found id: ""
	I0211 03:16:04.646722   63944 logs.go:282] 0 containers: []
	W0211 03:16:04.646731   63944 logs.go:284] No container was found matching "kube-controller-manager"
	I0211 03:16:04.646739   63944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:16:04.646800   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:16:04.687798   63944 cri.go:89] found id: ""
	I0211 03:16:04.687823   63944 logs.go:282] 0 containers: []
	W0211 03:16:04.687831   63944 logs.go:284] No container was found matching "kindnet"
	I0211 03:16:04.687837   63944 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0211 03:16:04.687897   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0211 03:16:04.728755   63944 cri.go:89] found id: ""
	I0211 03:16:04.728783   63944 logs.go:282] 0 containers: []
	W0211 03:16:04.728793   63944 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0211 03:16:04.728803   63944 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:16:04.728821   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:16:04.812328   63944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:16:04.812351   63944 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:16:04.812365   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:16:04.889037   63944 logs.go:123] Gathering logs for container status ...
	I0211 03:16:04.889071   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:16:04.934969   63944 logs.go:123] Gathering logs for kubelet ...
	I0211 03:16:04.934995   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:16:04.988276   63944 logs.go:123] Gathering logs for dmesg ...
	I0211 03:16:04.988307   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:16:07.502017   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:16:07.515636   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:16:07.515703   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:16:07.549623   63944 cri.go:89] found id: ""
	I0211 03:16:07.549657   63944 logs.go:282] 0 containers: []
	W0211 03:16:07.549668   63944 logs.go:284] No container was found matching "kube-apiserver"
	I0211 03:16:07.549677   63944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:16:07.549742   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:16:07.583249   63944 cri.go:89] found id: ""
	I0211 03:16:07.583274   63944 logs.go:282] 0 containers: []
	W0211 03:16:07.583283   63944 logs.go:284] No container was found matching "etcd"
	I0211 03:16:07.583290   63944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:16:07.583345   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:16:07.617220   63944 cri.go:89] found id: ""
	I0211 03:16:07.617238   63944 logs.go:282] 0 containers: []
	W0211 03:16:07.617244   63944 logs.go:284] No container was found matching "coredns"
	I0211 03:16:07.617249   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:16:07.617292   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:16:07.652754   63944 cri.go:89] found id: ""
	I0211 03:16:07.652778   63944 logs.go:282] 0 containers: []
	W0211 03:16:07.652785   63944 logs.go:284] No container was found matching "kube-scheduler"
	I0211 03:16:07.652791   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:16:07.652837   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:16:07.691881   63944 cri.go:89] found id: ""
	I0211 03:16:07.691911   63944 logs.go:282] 0 containers: []
	W0211 03:16:07.691921   63944 logs.go:284] No container was found matching "kube-proxy"
	I0211 03:16:07.691929   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:16:07.691994   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:16:07.732078   63944 cri.go:89] found id: ""
	I0211 03:16:07.732101   63944 logs.go:282] 0 containers: []
	W0211 03:16:07.732110   63944 logs.go:284] No container was found matching "kube-controller-manager"
	I0211 03:16:07.732115   63944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:16:07.732161   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:16:07.765288   63944 cri.go:89] found id: ""
	I0211 03:16:07.765309   63944 logs.go:282] 0 containers: []
	W0211 03:16:07.765317   63944 logs.go:284] No container was found matching "kindnet"
	I0211 03:16:07.765331   63944 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0211 03:16:07.765389   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0211 03:16:07.798860   63944 cri.go:89] found id: ""
	I0211 03:16:07.798902   63944 logs.go:282] 0 containers: []
	W0211 03:16:07.798912   63944 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0211 03:16:07.798922   63944 logs.go:123] Gathering logs for kubelet ...
	I0211 03:16:07.798936   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:16:07.862757   63944 logs.go:123] Gathering logs for dmesg ...
	I0211 03:16:07.862797   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:16:07.876907   63944 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:16:07.876932   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:16:07.950420   63944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:16:07.950445   63944 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:16:07.950461   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:16:08.034965   63944 logs.go:123] Gathering logs for container status ...
	I0211 03:16:08.035009   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:16:10.581454   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:16:10.594196   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:16:10.594273   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:16:10.625073   63944 cri.go:89] found id: ""
	I0211 03:16:10.625101   63944 logs.go:282] 0 containers: []
	W0211 03:16:10.625112   63944 logs.go:284] No container was found matching "kube-apiserver"
	I0211 03:16:10.625120   63944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:16:10.625181   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:16:10.655616   63944 cri.go:89] found id: ""
	I0211 03:16:10.655644   63944 logs.go:282] 0 containers: []
	W0211 03:16:10.655655   63944 logs.go:284] No container was found matching "etcd"
	I0211 03:16:10.655663   63944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:16:10.655725   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:16:10.689077   63944 cri.go:89] found id: ""
	I0211 03:16:10.689105   63944 logs.go:282] 0 containers: []
	W0211 03:16:10.689114   63944 logs.go:284] No container was found matching "coredns"
	I0211 03:16:10.689122   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:16:10.689189   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:16:10.721016   63944 cri.go:89] found id: ""
	I0211 03:16:10.721046   63944 logs.go:282] 0 containers: []
	W0211 03:16:10.721058   63944 logs.go:284] No container was found matching "kube-scheduler"
	I0211 03:16:10.721066   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:16:10.721130   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:16:10.753974   63944 cri.go:89] found id: ""
	I0211 03:16:10.754005   63944 logs.go:282] 0 containers: []
	W0211 03:16:10.754016   63944 logs.go:284] No container was found matching "kube-proxy"
	I0211 03:16:10.754024   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:16:10.754091   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:16:10.784754   63944 cri.go:89] found id: ""
	I0211 03:16:10.784779   63944 logs.go:282] 0 containers: []
	W0211 03:16:10.784787   63944 logs.go:284] No container was found matching "kube-controller-manager"
	I0211 03:16:10.784794   63944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:16:10.784858   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:16:10.817232   63944 cri.go:89] found id: ""
	I0211 03:16:10.817254   63944 logs.go:282] 0 containers: []
	W0211 03:16:10.817261   63944 logs.go:284] No container was found matching "kindnet"
	I0211 03:16:10.817266   63944 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0211 03:16:10.817325   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0211 03:16:10.848886   63944 cri.go:89] found id: ""
	I0211 03:16:10.848915   63944 logs.go:282] 0 containers: []
	W0211 03:16:10.848923   63944 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0211 03:16:10.848931   63944 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:16:10.848942   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:16:10.926922   63944 logs.go:123] Gathering logs for container status ...
	I0211 03:16:10.926963   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:16:10.963712   63944 logs.go:123] Gathering logs for kubelet ...
	I0211 03:16:10.963738   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:16:11.013594   63944 logs.go:123] Gathering logs for dmesg ...
	I0211 03:16:11.013622   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:16:11.026283   63944 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:16:11.026308   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:16:11.099306   63944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:16:13.600057   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:16:13.612798   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:16:13.612849   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:16:13.645939   63944 cri.go:89] found id: ""
	I0211 03:16:13.645963   63944 logs.go:282] 0 containers: []
	W0211 03:16:13.645973   63944 logs.go:284] No container was found matching "kube-apiserver"
	I0211 03:16:13.645981   63944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:16:13.646038   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:16:13.676642   63944 cri.go:89] found id: ""
	I0211 03:16:13.676666   63944 logs.go:282] 0 containers: []
	W0211 03:16:13.676673   63944 logs.go:284] No container was found matching "etcd"
	I0211 03:16:13.676679   63944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:16:13.676725   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:16:13.708310   63944 cri.go:89] found id: ""
	I0211 03:16:13.708340   63944 logs.go:282] 0 containers: []
	W0211 03:16:13.708351   63944 logs.go:284] No container was found matching "coredns"
	I0211 03:16:13.708358   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:16:13.708415   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:16:13.741844   63944 cri.go:89] found id: ""
	I0211 03:16:13.741869   63944 logs.go:282] 0 containers: []
	W0211 03:16:13.741879   63944 logs.go:284] No container was found matching "kube-scheduler"
	I0211 03:16:13.741886   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:16:13.741940   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:16:13.777331   63944 cri.go:89] found id: ""
	I0211 03:16:13.777351   63944 logs.go:282] 0 containers: []
	W0211 03:16:13.777357   63944 logs.go:284] No container was found matching "kube-proxy"
	I0211 03:16:13.777362   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:16:13.777413   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:16:13.811932   63944 cri.go:89] found id: ""
	I0211 03:16:13.811954   63944 logs.go:282] 0 containers: []
	W0211 03:16:13.811961   63944 logs.go:284] No container was found matching "kube-controller-manager"
	I0211 03:16:13.811967   63944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:16:13.812017   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:16:13.850892   63944 cri.go:89] found id: ""
	I0211 03:16:13.850922   63944 logs.go:282] 0 containers: []
	W0211 03:16:13.850932   63944 logs.go:284] No container was found matching "kindnet"
	I0211 03:16:13.850939   63944 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0211 03:16:13.850999   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0211 03:16:13.887736   63944 cri.go:89] found id: ""
	I0211 03:16:13.887762   63944 logs.go:282] 0 containers: []
	W0211 03:16:13.887772   63944 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0211 03:16:13.887782   63944 logs.go:123] Gathering logs for kubelet ...
	I0211 03:16:13.887797   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:16:13.949161   63944 logs.go:123] Gathering logs for dmesg ...
	I0211 03:16:13.949195   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:16:13.964629   63944 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:16:13.964665   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:16:14.028679   63944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:16:14.028706   63944 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:16:14.028719   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:16:14.110712   63944 logs.go:123] Gathering logs for container status ...
	I0211 03:16:14.110754   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:16:16.659559   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:16:16.672484   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:16:16.672564   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:16:16.707991   63944 cri.go:89] found id: ""
	I0211 03:16:16.708026   63944 logs.go:282] 0 containers: []
	W0211 03:16:16.708040   63944 logs.go:284] No container was found matching "kube-apiserver"
	I0211 03:16:16.708048   63944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:16:16.708117   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:16:16.741518   63944 cri.go:89] found id: ""
	I0211 03:16:16.741546   63944 logs.go:282] 0 containers: []
	W0211 03:16:16.741553   63944 logs.go:284] No container was found matching "etcd"
	I0211 03:16:16.741559   63944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:16:16.741623   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:16:16.774379   63944 cri.go:89] found id: ""
	I0211 03:16:16.774405   63944 logs.go:282] 0 containers: []
	W0211 03:16:16.774416   63944 logs.go:284] No container was found matching "coredns"
	I0211 03:16:16.774423   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:16:16.774486   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:16:16.809104   63944 cri.go:89] found id: ""
	I0211 03:16:16.809145   63944 logs.go:282] 0 containers: []
	W0211 03:16:16.809157   63944 logs.go:284] No container was found matching "kube-scheduler"
	I0211 03:16:16.809164   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:16:16.809235   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:16:16.842486   63944 cri.go:89] found id: ""
	I0211 03:16:16.842515   63944 logs.go:282] 0 containers: []
	W0211 03:16:16.842526   63944 logs.go:284] No container was found matching "kube-proxy"
	I0211 03:16:16.842534   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:16:16.842595   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:16:16.880137   63944 cri.go:89] found id: ""
	I0211 03:16:16.880170   63944 logs.go:282] 0 containers: []
	W0211 03:16:16.880182   63944 logs.go:284] No container was found matching "kube-controller-manager"
	I0211 03:16:16.880190   63944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:16:16.880260   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:16:16.912702   63944 cri.go:89] found id: ""
	I0211 03:16:16.912727   63944 logs.go:282] 0 containers: []
	W0211 03:16:16.912734   63944 logs.go:284] No container was found matching "kindnet"
	I0211 03:16:16.912739   63944 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0211 03:16:16.912796   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0211 03:16:16.944639   63944 cri.go:89] found id: ""
	I0211 03:16:16.944665   63944 logs.go:282] 0 containers: []
	W0211 03:16:16.944673   63944 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0211 03:16:16.944680   63944 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:16:16.944695   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:16:17.036577   63944 logs.go:123] Gathering logs for container status ...
	I0211 03:16:17.036612   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:16:17.072597   63944 logs.go:123] Gathering logs for kubelet ...
	I0211 03:16:17.072631   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:16:17.128616   63944 logs.go:123] Gathering logs for dmesg ...
	I0211 03:16:17.128646   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:16:17.141291   63944 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:16:17.141317   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:16:17.207502   63944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:16:19.708280   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:16:19.724640   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:16:19.724702   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:16:19.762892   63944 cri.go:89] found id: ""
	I0211 03:16:19.762922   63944 logs.go:282] 0 containers: []
	W0211 03:16:19.762933   63944 logs.go:284] No container was found matching "kube-apiserver"
	I0211 03:16:19.762941   63944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:16:19.763001   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:16:19.794767   63944 cri.go:89] found id: ""
	I0211 03:16:19.794800   63944 logs.go:282] 0 containers: []
	W0211 03:16:19.794811   63944 logs.go:284] No container was found matching "etcd"
	I0211 03:16:19.794818   63944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:16:19.794894   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:16:19.827288   63944 cri.go:89] found id: ""
	I0211 03:16:19.827320   63944 logs.go:282] 0 containers: []
	W0211 03:16:19.827330   63944 logs.go:284] No container was found matching "coredns"
	I0211 03:16:19.827337   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:16:19.827396   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:16:19.860177   63944 cri.go:89] found id: ""
	I0211 03:16:19.860201   63944 logs.go:282] 0 containers: []
	W0211 03:16:19.860210   63944 logs.go:284] No container was found matching "kube-scheduler"
	I0211 03:16:19.860215   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:16:19.860273   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:16:19.898949   63944 cri.go:89] found id: ""
	I0211 03:16:19.898977   63944 logs.go:282] 0 containers: []
	W0211 03:16:19.898986   63944 logs.go:284] No container was found matching "kube-proxy"
	I0211 03:16:19.898994   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:16:19.899055   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:16:19.960763   63944 cri.go:89] found id: ""
	I0211 03:16:19.960796   63944 logs.go:282] 0 containers: []
	W0211 03:16:19.960806   63944 logs.go:284] No container was found matching "kube-controller-manager"
	I0211 03:16:19.960814   63944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:16:19.960872   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:16:19.998590   63944 cri.go:89] found id: ""
	I0211 03:16:19.998618   63944 logs.go:282] 0 containers: []
	W0211 03:16:19.998629   63944 logs.go:284] No container was found matching "kindnet"
	I0211 03:16:19.998637   63944 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0211 03:16:19.998701   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0211 03:16:20.031944   63944 cri.go:89] found id: ""
	I0211 03:16:20.031978   63944 logs.go:282] 0 containers: []
	W0211 03:16:20.031989   63944 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0211 03:16:20.031997   63944 logs.go:123] Gathering logs for container status ...
	I0211 03:16:20.032009   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:16:20.070779   63944 logs.go:123] Gathering logs for kubelet ...
	I0211 03:16:20.070810   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:16:20.123103   63944 logs.go:123] Gathering logs for dmesg ...
	I0211 03:16:20.123136   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:16:20.135979   63944 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:16:20.136007   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:16:20.209061   63944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:16:20.209086   63944 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:16:20.209105   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:16:22.789098   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:16:22.807195   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:16:22.807277   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:16:22.851913   63944 cri.go:89] found id: ""
	I0211 03:16:22.851945   63944 logs.go:282] 0 containers: []
	W0211 03:16:22.851955   63944 logs.go:284] No container was found matching "kube-apiserver"
	I0211 03:16:22.851963   63944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:16:22.852029   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:16:22.885968   63944 cri.go:89] found id: ""
	I0211 03:16:22.885992   63944 logs.go:282] 0 containers: []
	W0211 03:16:22.886001   63944 logs.go:284] No container was found matching "etcd"
	I0211 03:16:22.886009   63944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:16:22.886062   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:16:22.921407   63944 cri.go:89] found id: ""
	I0211 03:16:22.921435   63944 logs.go:282] 0 containers: []
	W0211 03:16:22.921442   63944 logs.go:284] No container was found matching "coredns"
	I0211 03:16:22.921448   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:16:22.921506   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:16:22.960269   63944 cri.go:89] found id: ""
	I0211 03:16:22.960295   63944 logs.go:282] 0 containers: []
	W0211 03:16:22.960316   63944 logs.go:284] No container was found matching "kube-scheduler"
	I0211 03:16:22.960324   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:16:22.960396   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:16:23.001678   63944 cri.go:89] found id: ""
	I0211 03:16:23.001708   63944 logs.go:282] 0 containers: []
	W0211 03:16:23.001718   63944 logs.go:284] No container was found matching "kube-proxy"
	I0211 03:16:23.001725   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:16:23.001784   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:16:23.039519   63944 cri.go:89] found id: ""
	I0211 03:16:23.039550   63944 logs.go:282] 0 containers: []
	W0211 03:16:23.039560   63944 logs.go:284] No container was found matching "kube-controller-manager"
	I0211 03:16:23.039569   63944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:16:23.039633   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:16:23.086948   63944 cri.go:89] found id: ""
	I0211 03:16:23.086975   63944 logs.go:282] 0 containers: []
	W0211 03:16:23.086985   63944 logs.go:284] No container was found matching "kindnet"
	I0211 03:16:23.086992   63944 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0211 03:16:23.087049   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0211 03:16:23.131380   63944 cri.go:89] found id: ""
	I0211 03:16:23.131421   63944 logs.go:282] 0 containers: []
	W0211 03:16:23.131431   63944 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0211 03:16:23.131442   63944 logs.go:123] Gathering logs for kubelet ...
	I0211 03:16:23.131456   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:16:23.197237   63944 logs.go:123] Gathering logs for dmesg ...
	I0211 03:16:23.197266   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:16:23.211563   63944 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:16:23.211593   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:16:23.287089   63944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:16:23.287121   63944 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:16:23.287136   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:16:23.398710   63944 logs.go:123] Gathering logs for container status ...
	I0211 03:16:23.398751   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:16:25.941680   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:16:25.956789   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:16:25.956858   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:16:25.994773   63944 cri.go:89] found id: ""
	I0211 03:16:25.994802   63944 logs.go:282] 0 containers: []
	W0211 03:16:25.994811   63944 logs.go:284] No container was found matching "kube-apiserver"
	I0211 03:16:25.994817   63944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:16:25.994885   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:16:26.030099   63944 cri.go:89] found id: ""
	I0211 03:16:26.030127   63944 logs.go:282] 0 containers: []
	W0211 03:16:26.030136   63944 logs.go:284] No container was found matching "etcd"
	I0211 03:16:26.030142   63944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:16:26.030191   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:16:26.066963   63944 cri.go:89] found id: ""
	I0211 03:16:26.066986   63944 logs.go:282] 0 containers: []
	W0211 03:16:26.066996   63944 logs.go:284] No container was found matching "coredns"
	I0211 03:16:26.067003   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:16:26.067057   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:16:26.096994   63944 cri.go:89] found id: ""
	I0211 03:16:26.097022   63944 logs.go:282] 0 containers: []
	W0211 03:16:26.097033   63944 logs.go:284] No container was found matching "kube-scheduler"
	I0211 03:16:26.097041   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:16:26.097096   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:16:26.129145   63944 cri.go:89] found id: ""
	I0211 03:16:26.129184   63944 logs.go:282] 0 containers: []
	W0211 03:16:26.129196   63944 logs.go:284] No container was found matching "kube-proxy"
	I0211 03:16:26.129204   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:16:26.129261   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:16:26.164727   63944 cri.go:89] found id: ""
	I0211 03:16:26.164754   63944 logs.go:282] 0 containers: []
	W0211 03:16:26.164762   63944 logs.go:284] No container was found matching "kube-controller-manager"
	I0211 03:16:26.164768   63944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:16:26.164816   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:16:26.200629   63944 cri.go:89] found id: ""
	I0211 03:16:26.200656   63944 logs.go:282] 0 containers: []
	W0211 03:16:26.200667   63944 logs.go:284] No container was found matching "kindnet"
	I0211 03:16:26.200674   63944 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0211 03:16:26.200735   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0211 03:16:26.235180   63944 cri.go:89] found id: ""
	I0211 03:16:26.235208   63944 logs.go:282] 0 containers: []
	W0211 03:16:26.235218   63944 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0211 03:16:26.235229   63944 logs.go:123] Gathering logs for container status ...
	I0211 03:16:26.235243   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:16:26.273656   63944 logs.go:123] Gathering logs for kubelet ...
	I0211 03:16:26.273689   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:16:26.337253   63944 logs.go:123] Gathering logs for dmesg ...
	I0211 03:16:26.337287   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:16:26.356735   63944 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:16:26.356775   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:16:26.447423   63944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:16:26.447442   63944 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:16:26.447453   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:16:29.022571   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:16:29.036373   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:16:29.036437   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:16:29.069237   63944 cri.go:89] found id: ""
	I0211 03:16:29.069260   63944 logs.go:282] 0 containers: []
	W0211 03:16:29.069267   63944 logs.go:284] No container was found matching "kube-apiserver"
	I0211 03:16:29.069272   63944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:16:29.069318   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:16:29.101128   63944 cri.go:89] found id: ""
	I0211 03:16:29.101155   63944 logs.go:282] 0 containers: []
	W0211 03:16:29.101165   63944 logs.go:284] No container was found matching "etcd"
	I0211 03:16:29.101181   63944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:16:29.101244   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:16:29.133661   63944 cri.go:89] found id: ""
	I0211 03:16:29.133688   63944 logs.go:282] 0 containers: []
	W0211 03:16:29.133699   63944 logs.go:284] No container was found matching "coredns"
	I0211 03:16:29.133707   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:16:29.133766   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:16:29.165248   63944 cri.go:89] found id: ""
	I0211 03:16:29.165280   63944 logs.go:282] 0 containers: []
	W0211 03:16:29.165291   63944 logs.go:284] No container was found matching "kube-scheduler"
	I0211 03:16:29.165299   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:16:29.165353   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:16:29.196160   63944 cri.go:89] found id: ""
	I0211 03:16:29.196191   63944 logs.go:282] 0 containers: []
	W0211 03:16:29.196200   63944 logs.go:284] No container was found matching "kube-proxy"
	I0211 03:16:29.196208   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:16:29.196274   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:16:29.233485   63944 cri.go:89] found id: ""
	I0211 03:16:29.233509   63944 logs.go:282] 0 containers: []
	W0211 03:16:29.233516   63944 logs.go:284] No container was found matching "kube-controller-manager"
	I0211 03:16:29.233521   63944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:16:29.233567   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:16:29.264239   63944 cri.go:89] found id: ""
	I0211 03:16:29.264264   63944 logs.go:282] 0 containers: []
	W0211 03:16:29.264272   63944 logs.go:284] No container was found matching "kindnet"
	I0211 03:16:29.264279   63944 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0211 03:16:29.264335   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0211 03:16:29.295470   63944 cri.go:89] found id: ""
	I0211 03:16:29.295498   63944 logs.go:282] 0 containers: []
	W0211 03:16:29.295505   63944 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0211 03:16:29.295515   63944 logs.go:123] Gathering logs for kubelet ...
	I0211 03:16:29.295528   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:16:29.345072   63944 logs.go:123] Gathering logs for dmesg ...
	I0211 03:16:29.345108   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:16:29.358166   63944 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:16:29.358194   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:16:29.425027   63944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:16:29.425048   63944 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:16:29.425063   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:16:29.501801   63944 logs.go:123] Gathering logs for container status ...
	I0211 03:16:29.501829   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:16:32.039615   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:16:32.059163   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:16:32.059242   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:16:32.112371   63944 cri.go:89] found id: ""
	I0211 03:16:32.112402   63944 logs.go:282] 0 containers: []
	W0211 03:16:32.112418   63944 logs.go:284] No container was found matching "kube-apiserver"
	I0211 03:16:32.112426   63944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:16:32.112481   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:16:32.155538   63944 cri.go:89] found id: ""
	I0211 03:16:32.155572   63944 logs.go:282] 0 containers: []
	W0211 03:16:32.155585   63944 logs.go:284] No container was found matching "etcd"
	I0211 03:16:32.155594   63944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:16:32.155668   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:16:32.202480   63944 cri.go:89] found id: ""
	I0211 03:16:32.202506   63944 logs.go:282] 0 containers: []
	W0211 03:16:32.202516   63944 logs.go:284] No container was found matching "coredns"
	I0211 03:16:32.202530   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:16:32.202593   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:16:32.239379   63944 cri.go:89] found id: ""
	I0211 03:16:32.239405   63944 logs.go:282] 0 containers: []
	W0211 03:16:32.239416   63944 logs.go:284] No container was found matching "kube-scheduler"
	I0211 03:16:32.239427   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:16:32.239478   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:16:32.281149   63944 cri.go:89] found id: ""
	I0211 03:16:32.281173   63944 logs.go:282] 0 containers: []
	W0211 03:16:32.281184   63944 logs.go:284] No container was found matching "kube-proxy"
	I0211 03:16:32.281191   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:16:32.281250   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:16:32.320202   63944 cri.go:89] found id: ""
	I0211 03:16:32.320234   63944 logs.go:282] 0 containers: []
	W0211 03:16:32.320244   63944 logs.go:284] No container was found matching "kube-controller-manager"
	I0211 03:16:32.320251   63944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:16:32.320306   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:16:32.363557   63944 cri.go:89] found id: ""
	I0211 03:16:32.363591   63944 logs.go:282] 0 containers: []
	W0211 03:16:32.363603   63944 logs.go:284] No container was found matching "kindnet"
	I0211 03:16:32.363618   63944 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0211 03:16:32.363681   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0211 03:16:32.414229   63944 cri.go:89] found id: ""
	I0211 03:16:32.414256   63944 logs.go:282] 0 containers: []
	W0211 03:16:32.414267   63944 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0211 03:16:32.414278   63944 logs.go:123] Gathering logs for kubelet ...
	I0211 03:16:32.414292   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:16:32.467633   63944 logs.go:123] Gathering logs for dmesg ...
	I0211 03:16:32.467670   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:16:32.483205   63944 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:16:32.483236   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:16:32.560149   63944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:16:32.560176   63944 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:16:32.560190   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:16:32.647990   63944 logs.go:123] Gathering logs for container status ...
	I0211 03:16:32.648024   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:16:35.191000   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:16:35.207551   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:16:35.207620   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:16:35.249005   63944 cri.go:89] found id: ""
	I0211 03:16:35.249040   63944 logs.go:282] 0 containers: []
	W0211 03:16:35.249060   63944 logs.go:284] No container was found matching "kube-apiserver"
	I0211 03:16:35.249068   63944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:16:35.249128   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:16:35.286923   63944 cri.go:89] found id: ""
	I0211 03:16:35.286956   63944 logs.go:282] 0 containers: []
	W0211 03:16:35.286968   63944 logs.go:284] No container was found matching "etcd"
	I0211 03:16:35.286976   63944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:16:35.287035   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:16:35.322081   63944 cri.go:89] found id: ""
	I0211 03:16:35.322106   63944 logs.go:282] 0 containers: []
	W0211 03:16:35.322116   63944 logs.go:284] No container was found matching "coredns"
	I0211 03:16:35.322127   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:16:35.322185   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:16:35.357180   63944 cri.go:89] found id: ""
	I0211 03:16:35.357205   63944 logs.go:282] 0 containers: []
	W0211 03:16:35.357216   63944 logs.go:284] No container was found matching "kube-scheduler"
	I0211 03:16:35.357223   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:16:35.357282   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:16:35.389558   63944 cri.go:89] found id: ""
	I0211 03:16:35.389588   63944 logs.go:282] 0 containers: []
	W0211 03:16:35.389599   63944 logs.go:284] No container was found matching "kube-proxy"
	I0211 03:16:35.389607   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:16:35.389670   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:16:35.426578   63944 cri.go:89] found id: ""
	I0211 03:16:35.426607   63944 logs.go:282] 0 containers: []
	W0211 03:16:35.426617   63944 logs.go:284] No container was found matching "kube-controller-manager"
	I0211 03:16:35.426625   63944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:16:35.426686   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:16:35.460929   63944 cri.go:89] found id: ""
	I0211 03:16:35.460960   63944 logs.go:282] 0 containers: []
	W0211 03:16:35.460972   63944 logs.go:284] No container was found matching "kindnet"
	I0211 03:16:35.460980   63944 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0211 03:16:35.461041   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0211 03:16:35.512344   63944 cri.go:89] found id: ""
	I0211 03:16:35.512375   63944 logs.go:282] 0 containers: []
	W0211 03:16:35.512385   63944 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0211 03:16:35.512397   63944 logs.go:123] Gathering logs for container status ...
	I0211 03:16:35.512414   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:16:35.566119   63944 logs.go:123] Gathering logs for kubelet ...
	I0211 03:16:35.566166   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:16:35.614885   63944 logs.go:123] Gathering logs for dmesg ...
	I0211 03:16:35.614918   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:16:35.631203   63944 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:16:35.631231   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:16:35.708238   63944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:16:35.708260   63944 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:16:35.708272   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:16:38.327149   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:16:38.343459   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:16:38.343537   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:16:38.389686   63944 cri.go:89] found id: ""
	I0211 03:16:38.389719   63944 logs.go:282] 0 containers: []
	W0211 03:16:38.389731   63944 logs.go:284] No container was found matching "kube-apiserver"
	I0211 03:16:38.389739   63944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:16:38.389800   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:16:38.429605   63944 cri.go:89] found id: ""
	I0211 03:16:38.429635   63944 logs.go:282] 0 containers: []
	W0211 03:16:38.429644   63944 logs.go:284] No container was found matching "etcd"
	I0211 03:16:38.429650   63944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:16:38.429712   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:16:38.467149   63944 cri.go:89] found id: ""
	I0211 03:16:38.467175   63944 logs.go:282] 0 containers: []
	W0211 03:16:38.467185   63944 logs.go:284] No container was found matching "coredns"
	I0211 03:16:38.467193   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:16:38.467256   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:16:38.506160   63944 cri.go:89] found id: ""
	I0211 03:16:38.506190   63944 logs.go:282] 0 containers: []
	W0211 03:16:38.506200   63944 logs.go:284] No container was found matching "kube-scheduler"
	I0211 03:16:38.506208   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:16:38.506273   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:16:38.553302   63944 cri.go:89] found id: ""
	I0211 03:16:38.553332   63944 logs.go:282] 0 containers: []
	W0211 03:16:38.553342   63944 logs.go:284] No container was found matching "kube-proxy"
	I0211 03:16:38.553349   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:16:38.553415   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:16:38.597096   63944 cri.go:89] found id: ""
	I0211 03:16:38.597130   63944 logs.go:282] 0 containers: []
	W0211 03:16:38.597140   63944 logs.go:284] No container was found matching "kube-controller-manager"
	I0211 03:16:38.597150   63944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:16:38.597205   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:16:38.636394   63944 cri.go:89] found id: ""
	I0211 03:16:38.636425   63944 logs.go:282] 0 containers: []
	W0211 03:16:38.636449   63944 logs.go:284] No container was found matching "kindnet"
	I0211 03:16:38.636456   63944 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0211 03:16:38.636526   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0211 03:16:38.672659   63944 cri.go:89] found id: ""
	I0211 03:16:38.672686   63944 logs.go:282] 0 containers: []
	W0211 03:16:38.672696   63944 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0211 03:16:38.672707   63944 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:16:38.672720   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:16:38.757193   63944 logs.go:123] Gathering logs for container status ...
	I0211 03:16:38.757229   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:16:38.802758   63944 logs.go:123] Gathering logs for kubelet ...
	I0211 03:16:38.802789   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:16:38.857250   63944 logs.go:123] Gathering logs for dmesg ...
	I0211 03:16:38.857292   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:16:38.872803   63944 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:16:38.872832   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:16:38.945002   63944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:16:41.445287   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:16:41.463656   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:16:41.463736   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:16:41.510544   63944 cri.go:89] found id: ""
	I0211 03:16:41.510586   63944 logs.go:282] 0 containers: []
	W0211 03:16:41.510599   63944 logs.go:284] No container was found matching "kube-apiserver"
	I0211 03:16:41.510607   63944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:16:41.510669   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:16:41.558094   63944 cri.go:89] found id: ""
	I0211 03:16:41.558127   63944 logs.go:282] 0 containers: []
	W0211 03:16:41.558138   63944 logs.go:284] No container was found matching "etcd"
	I0211 03:16:41.558146   63944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:16:41.558207   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:16:41.603692   63944 cri.go:89] found id: ""
	I0211 03:16:41.603722   63944 logs.go:282] 0 containers: []
	W0211 03:16:41.603732   63944 logs.go:284] No container was found matching "coredns"
	I0211 03:16:41.603740   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:16:41.603794   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:16:41.651335   63944 cri.go:89] found id: ""
	I0211 03:16:41.651365   63944 logs.go:282] 0 containers: []
	W0211 03:16:41.651376   63944 logs.go:284] No container was found matching "kube-scheduler"
	I0211 03:16:41.651383   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:16:41.651452   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:16:41.700460   63944 cri.go:89] found id: ""
	I0211 03:16:41.700493   63944 logs.go:282] 0 containers: []
	W0211 03:16:41.700504   63944 logs.go:284] No container was found matching "kube-proxy"
	I0211 03:16:41.700511   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:16:41.700579   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:16:41.751655   63944 cri.go:89] found id: ""
	I0211 03:16:41.751691   63944 logs.go:282] 0 containers: []
	W0211 03:16:41.751704   63944 logs.go:284] No container was found matching "kube-controller-manager"
	I0211 03:16:41.751714   63944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:16:41.751779   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:16:41.798237   63944 cri.go:89] found id: ""
	I0211 03:16:41.798285   63944 logs.go:282] 0 containers: []
	W0211 03:16:41.798296   63944 logs.go:284] No container was found matching "kindnet"
	I0211 03:16:41.798304   63944 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0211 03:16:41.798371   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0211 03:16:41.844131   63944 cri.go:89] found id: ""
	I0211 03:16:41.844166   63944 logs.go:282] 0 containers: []
	W0211 03:16:41.844179   63944 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0211 03:16:41.844190   63944 logs.go:123] Gathering logs for container status ...
	I0211 03:16:41.844213   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:16:41.883152   63944 logs.go:123] Gathering logs for kubelet ...
	I0211 03:16:41.883195   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:16:41.949574   63944 logs.go:123] Gathering logs for dmesg ...
	I0211 03:16:41.949615   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:16:41.964946   63944 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:16:41.964978   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:16:42.040753   63944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:16:42.040784   63944 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:16:42.040800   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:16:44.656648   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:16:44.669389   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:16:44.669445   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:16:44.700116   63944 cri.go:89] found id: ""
	I0211 03:16:44.700150   63944 logs.go:282] 0 containers: []
	W0211 03:16:44.700162   63944 logs.go:284] No container was found matching "kube-apiserver"
	I0211 03:16:44.700169   63944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:16:44.700240   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:16:44.731025   63944 cri.go:89] found id: ""
	I0211 03:16:44.731057   63944 logs.go:282] 0 containers: []
	W0211 03:16:44.731069   63944 logs.go:284] No container was found matching "etcd"
	I0211 03:16:44.731076   63944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:16:44.731129   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:16:44.764748   63944 cri.go:89] found id: ""
	I0211 03:16:44.764777   63944 logs.go:282] 0 containers: []
	W0211 03:16:44.764788   63944 logs.go:284] No container was found matching "coredns"
	I0211 03:16:44.764795   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:16:44.764853   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:16:44.801650   63944 cri.go:89] found id: ""
	I0211 03:16:44.801673   63944 logs.go:282] 0 containers: []
	W0211 03:16:44.801681   63944 logs.go:284] No container was found matching "kube-scheduler"
	I0211 03:16:44.801687   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:16:44.801736   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:16:44.838667   63944 cri.go:89] found id: ""
	I0211 03:16:44.838694   63944 logs.go:282] 0 containers: []
	W0211 03:16:44.838701   63944 logs.go:284] No container was found matching "kube-proxy"
	I0211 03:16:44.838707   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:16:44.838763   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:16:44.877544   63944 cri.go:89] found id: ""
	I0211 03:16:44.877569   63944 logs.go:282] 0 containers: []
	W0211 03:16:44.877580   63944 logs.go:284] No container was found matching "kube-controller-manager"
	I0211 03:16:44.877588   63944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:16:44.877649   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:16:44.909898   63944 cri.go:89] found id: ""
	I0211 03:16:44.909926   63944 logs.go:282] 0 containers: []
	W0211 03:16:44.909937   63944 logs.go:284] No container was found matching "kindnet"
	I0211 03:16:44.909944   63944 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0211 03:16:44.910021   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0211 03:16:44.942224   63944 cri.go:89] found id: ""
	I0211 03:16:44.942254   63944 logs.go:282] 0 containers: []
	W0211 03:16:44.942264   63944 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0211 03:16:44.942274   63944 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:16:44.942286   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:16:45.027307   63944 logs.go:123] Gathering logs for container status ...
	I0211 03:16:45.027346   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:16:45.065017   63944 logs.go:123] Gathering logs for kubelet ...
	I0211 03:16:45.065051   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:16:45.112592   63944 logs.go:123] Gathering logs for dmesg ...
	I0211 03:16:45.112622   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:16:45.125017   63944 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:16:45.125041   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:16:45.194332   63944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:16:47.695001   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:16:47.707161   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:16:47.707231   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:16:47.738489   63944 cri.go:89] found id: ""
	I0211 03:16:47.738521   63944 logs.go:282] 0 containers: []
	W0211 03:16:47.738531   63944 logs.go:284] No container was found matching "kube-apiserver"
	I0211 03:16:47.738540   63944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:16:47.738604   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:16:47.770624   63944 cri.go:89] found id: ""
	I0211 03:16:47.770653   63944 logs.go:282] 0 containers: []
	W0211 03:16:47.770665   63944 logs.go:284] No container was found matching "etcd"
	I0211 03:16:47.770672   63944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:16:47.770736   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:16:47.802910   63944 cri.go:89] found id: ""
	I0211 03:16:47.802942   63944 logs.go:282] 0 containers: []
	W0211 03:16:47.802953   63944 logs.go:284] No container was found matching "coredns"
	I0211 03:16:47.802961   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:16:47.803021   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:16:47.840259   63944 cri.go:89] found id: ""
	I0211 03:16:47.840290   63944 logs.go:282] 0 containers: []
	W0211 03:16:47.840300   63944 logs.go:284] No container was found matching "kube-scheduler"
	I0211 03:16:47.840308   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:16:47.840368   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:16:47.871346   63944 cri.go:89] found id: ""
	I0211 03:16:47.871382   63944 logs.go:282] 0 containers: []
	W0211 03:16:47.871394   63944 logs.go:284] No container was found matching "kube-proxy"
	I0211 03:16:47.871404   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:16:47.871485   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:16:47.903301   63944 cri.go:89] found id: ""
	I0211 03:16:47.903333   63944 logs.go:282] 0 containers: []
	W0211 03:16:47.903344   63944 logs.go:284] No container was found matching "kube-controller-manager"
	I0211 03:16:47.903355   63944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:16:47.903403   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:16:47.941208   63944 cri.go:89] found id: ""
	I0211 03:16:47.941234   63944 logs.go:282] 0 containers: []
	W0211 03:16:47.941246   63944 logs.go:284] No container was found matching "kindnet"
	I0211 03:16:47.941254   63944 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0211 03:16:47.941313   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0211 03:16:47.975125   63944 cri.go:89] found id: ""
	I0211 03:16:47.975154   63944 logs.go:282] 0 containers: []
	W0211 03:16:47.975164   63944 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0211 03:16:47.975175   63944 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:16:47.975188   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:16:48.052529   63944 logs.go:123] Gathering logs for container status ...
	I0211 03:16:48.052563   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:16:48.087883   63944 logs.go:123] Gathering logs for kubelet ...
	I0211 03:16:48.087909   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:16:48.137539   63944 logs.go:123] Gathering logs for dmesg ...
	I0211 03:16:48.137566   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:16:48.149770   63944 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:16:48.149798   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:16:48.219468   63944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:16:50.720354   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:16:50.732834   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:16:50.732887   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:16:50.770957   63944 cri.go:89] found id: ""
	I0211 03:16:50.770983   63944 logs.go:282] 0 containers: []
	W0211 03:16:50.770990   63944 logs.go:284] No container was found matching "kube-apiserver"
	I0211 03:16:50.770996   63944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:16:50.771043   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:16:50.801794   63944 cri.go:89] found id: ""
	I0211 03:16:50.801823   63944 logs.go:282] 0 containers: []
	W0211 03:16:50.801833   63944 logs.go:284] No container was found matching "etcd"
	I0211 03:16:50.801841   63944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:16:50.801899   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:16:50.837119   63944 cri.go:89] found id: ""
	I0211 03:16:50.837150   63944 logs.go:282] 0 containers: []
	W0211 03:16:50.837160   63944 logs.go:284] No container was found matching "coredns"
	I0211 03:16:50.837168   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:16:50.837237   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:16:50.869383   63944 cri.go:89] found id: ""
	I0211 03:16:50.869424   63944 logs.go:282] 0 containers: []
	W0211 03:16:50.869437   63944 logs.go:284] No container was found matching "kube-scheduler"
	I0211 03:16:50.869446   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:16:50.869506   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:16:50.902006   63944 cri.go:89] found id: ""
	I0211 03:16:50.902027   63944 logs.go:282] 0 containers: []
	W0211 03:16:50.902034   63944 logs.go:284] No container was found matching "kube-proxy"
	I0211 03:16:50.902040   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:16:50.902100   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:16:50.934315   63944 cri.go:89] found id: ""
	I0211 03:16:50.934362   63944 logs.go:282] 0 containers: []
	W0211 03:16:50.934375   63944 logs.go:284] No container was found matching "kube-controller-manager"
	I0211 03:16:50.934384   63944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:16:50.934463   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:16:50.969156   63944 cri.go:89] found id: ""
	I0211 03:16:50.969189   63944 logs.go:282] 0 containers: []
	W0211 03:16:50.969201   63944 logs.go:284] No container was found matching "kindnet"
	I0211 03:16:50.969209   63944 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0211 03:16:50.969267   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0211 03:16:51.003364   63944 cri.go:89] found id: ""
	I0211 03:16:51.003387   63944 logs.go:282] 0 containers: []
	W0211 03:16:51.003395   63944 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0211 03:16:51.003402   63944 logs.go:123] Gathering logs for dmesg ...
	I0211 03:16:51.003412   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:16:51.016114   63944 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:16:51.016138   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:16:51.078476   63944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:16:51.078498   63944 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:16:51.078509   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:16:51.156082   63944 logs.go:123] Gathering logs for container status ...
	I0211 03:16:51.156114   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:16:51.194359   63944 logs.go:123] Gathering logs for kubelet ...
	I0211 03:16:51.194399   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:16:53.743034   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:16:53.755345   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:16:53.755400   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:16:53.787877   63944 cri.go:89] found id: ""
	I0211 03:16:53.787904   63944 logs.go:282] 0 containers: []
	W0211 03:16:53.787914   63944 logs.go:284] No container was found matching "kube-apiserver"
	I0211 03:16:53.787920   63944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:16:53.787966   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:16:53.822673   63944 cri.go:89] found id: ""
	I0211 03:16:53.822696   63944 logs.go:282] 0 containers: []
	W0211 03:16:53.822703   63944 logs.go:284] No container was found matching "etcd"
	I0211 03:16:53.822709   63944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:16:53.822766   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:16:53.862343   63944 cri.go:89] found id: ""
	I0211 03:16:53.862375   63944 logs.go:282] 0 containers: []
	W0211 03:16:53.862384   63944 logs.go:284] No container was found matching "coredns"
	I0211 03:16:53.862390   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:16:53.862442   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:16:53.899420   63944 cri.go:89] found id: ""
	I0211 03:16:53.899453   63944 logs.go:282] 0 containers: []
	W0211 03:16:53.899461   63944 logs.go:284] No container was found matching "kube-scheduler"
	I0211 03:16:53.899466   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:16:53.899528   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:16:53.935699   63944 cri.go:89] found id: ""
	I0211 03:16:53.935727   63944 logs.go:282] 0 containers: []
	W0211 03:16:53.935738   63944 logs.go:284] No container was found matching "kube-proxy"
	I0211 03:16:53.935746   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:16:53.935807   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:16:53.972212   63944 cri.go:89] found id: ""
	I0211 03:16:53.972241   63944 logs.go:282] 0 containers: []
	W0211 03:16:53.972251   63944 logs.go:284] No container was found matching "kube-controller-manager"
	I0211 03:16:53.972259   63944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:16:53.972317   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:16:54.007163   63944 cri.go:89] found id: ""
	I0211 03:16:54.007192   63944 logs.go:282] 0 containers: []
	W0211 03:16:54.007213   63944 logs.go:284] No container was found matching "kindnet"
	I0211 03:16:54.007220   63944 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0211 03:16:54.007282   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0211 03:16:54.041135   63944 cri.go:89] found id: ""
	I0211 03:16:54.041158   63944 logs.go:282] 0 containers: []
	W0211 03:16:54.041166   63944 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0211 03:16:54.041175   63944 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:16:54.041189   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:16:54.116900   63944 logs.go:123] Gathering logs for container status ...
	I0211 03:16:54.116937   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:16:54.153540   63944 logs.go:123] Gathering logs for kubelet ...
	I0211 03:16:54.153574   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:16:54.205457   63944 logs.go:123] Gathering logs for dmesg ...
	I0211 03:16:54.205493   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:16:54.218487   63944 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:16:54.218520   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:16:54.285885   63944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:16:56.786919   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:16:56.798755   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:16:56.798831   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:16:56.830954   63944 cri.go:89] found id: ""
	I0211 03:16:56.830985   63944 logs.go:282] 0 containers: []
	W0211 03:16:56.830996   63944 logs.go:284] No container was found matching "kube-apiserver"
	I0211 03:16:56.831002   63944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:16:56.831063   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:16:56.862799   63944 cri.go:89] found id: ""
	I0211 03:16:56.862825   63944 logs.go:282] 0 containers: []
	W0211 03:16:56.862835   63944 logs.go:284] No container was found matching "etcd"
	I0211 03:16:56.862843   63944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:16:56.862915   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:16:56.898707   63944 cri.go:89] found id: ""
	I0211 03:16:56.898740   63944 logs.go:282] 0 containers: []
	W0211 03:16:56.898751   63944 logs.go:284] No container was found matching "coredns"
	I0211 03:16:56.898758   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:16:56.898814   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:16:56.932328   63944 cri.go:89] found id: ""
	I0211 03:16:56.932352   63944 logs.go:282] 0 containers: []
	W0211 03:16:56.932364   63944 logs.go:284] No container was found matching "kube-scheduler"
	I0211 03:16:56.932372   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:16:56.932430   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:16:56.968931   63944 cri.go:89] found id: ""
	I0211 03:16:56.968960   63944 logs.go:282] 0 containers: []
	W0211 03:16:56.968971   63944 logs.go:284] No container was found matching "kube-proxy"
	I0211 03:16:56.968979   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:16:56.969034   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:16:57.001427   63944 cri.go:89] found id: ""
	I0211 03:16:57.001459   63944 logs.go:282] 0 containers: []
	W0211 03:16:57.001469   63944 logs.go:284] No container was found matching "kube-controller-manager"
	I0211 03:16:57.001477   63944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:16:57.001527   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:16:57.034169   63944 cri.go:89] found id: ""
	I0211 03:16:57.034199   63944 logs.go:282] 0 containers: []
	W0211 03:16:57.034210   63944 logs.go:284] No container was found matching "kindnet"
	I0211 03:16:57.034216   63944 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0211 03:16:57.034265   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0211 03:16:57.066646   63944 cri.go:89] found id: ""
	I0211 03:16:57.066671   63944 logs.go:282] 0 containers: []
	W0211 03:16:57.066682   63944 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0211 03:16:57.066692   63944 logs.go:123] Gathering logs for kubelet ...
	I0211 03:16:57.066708   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:16:57.124509   63944 logs.go:123] Gathering logs for dmesg ...
	I0211 03:16:57.124545   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:16:57.138190   63944 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:16:57.138216   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:16:57.204176   63944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:16:57.204196   63944 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:16:57.204209   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:16:57.280209   63944 logs.go:123] Gathering logs for container status ...
	I0211 03:16:57.280242   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:16:59.819000   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:16:59.833288   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:16:59.833367   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:16:59.866296   63944 cri.go:89] found id: ""
	I0211 03:16:59.866329   63944 logs.go:282] 0 containers: []
	W0211 03:16:59.866340   63944 logs.go:284] No container was found matching "kube-apiserver"
	I0211 03:16:59.866347   63944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:16:59.866420   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:16:59.905222   63944 cri.go:89] found id: ""
	I0211 03:16:59.905250   63944 logs.go:282] 0 containers: []
	W0211 03:16:59.905260   63944 logs.go:284] No container was found matching "etcd"
	I0211 03:16:59.905267   63944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:16:59.905331   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:16:59.941829   63944 cri.go:89] found id: ""
	I0211 03:16:59.941867   63944 logs.go:282] 0 containers: []
	W0211 03:16:59.941881   63944 logs.go:284] No container was found matching "coredns"
	I0211 03:16:59.941891   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:16:59.941960   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:16:59.979171   63944 cri.go:89] found id: ""
	I0211 03:16:59.979202   63944 logs.go:282] 0 containers: []
	W0211 03:16:59.979213   63944 logs.go:284] No container was found matching "kube-scheduler"
	I0211 03:16:59.979221   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:16:59.979280   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:17:00.027680   63944 cri.go:89] found id: ""
	I0211 03:17:00.027706   63944 logs.go:282] 0 containers: []
	W0211 03:17:00.027717   63944 logs.go:284] No container was found matching "kube-proxy"
	I0211 03:17:00.027723   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:17:00.027787   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:17:00.070832   63944 cri.go:89] found id: ""
	I0211 03:17:00.070863   63944 logs.go:282] 0 containers: []
	W0211 03:17:00.070889   63944 logs.go:284] No container was found matching "kube-controller-manager"
	I0211 03:17:00.070897   63944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:17:00.070975   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:17:00.109396   63944 cri.go:89] found id: ""
	I0211 03:17:00.109426   63944 logs.go:282] 0 containers: []
	W0211 03:17:00.109438   63944 logs.go:284] No container was found matching "kindnet"
	I0211 03:17:00.109445   63944 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0211 03:17:00.109514   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0211 03:17:00.148398   63944 cri.go:89] found id: ""
	I0211 03:17:00.148431   63944 logs.go:282] 0 containers: []
	W0211 03:17:00.148442   63944 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0211 03:17:00.148452   63944 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:17:00.148467   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:17:00.250514   63944 logs.go:123] Gathering logs for container status ...
	I0211 03:17:00.250556   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:17:00.299042   63944 logs.go:123] Gathering logs for kubelet ...
	I0211 03:17:00.299067   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:17:00.363290   63944 logs.go:123] Gathering logs for dmesg ...
	I0211 03:17:00.363327   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:17:00.380865   63944 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:17:00.380901   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:17:00.493015   63944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:17:02.993313   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:17:03.011034   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:17:03.011113   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:17:03.057692   63944 cri.go:89] found id: ""
	I0211 03:17:03.057728   63944 logs.go:282] 0 containers: []
	W0211 03:17:03.057739   63944 logs.go:284] No container was found matching "kube-apiserver"
	I0211 03:17:03.057747   63944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:17:03.057813   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:17:03.097749   63944 cri.go:89] found id: ""
	I0211 03:17:03.097775   63944 logs.go:282] 0 containers: []
	W0211 03:17:03.097782   63944 logs.go:284] No container was found matching "etcd"
	I0211 03:17:03.097787   63944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:17:03.097846   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:17:03.133843   63944 cri.go:89] found id: ""
	I0211 03:17:03.133869   63944 logs.go:282] 0 containers: []
	W0211 03:17:03.133879   63944 logs.go:284] No container was found matching "coredns"
	I0211 03:17:03.133888   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:17:03.133952   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:17:03.171938   63944 cri.go:89] found id: ""
	I0211 03:17:03.171968   63944 logs.go:282] 0 containers: []
	W0211 03:17:03.171978   63944 logs.go:284] No container was found matching "kube-scheduler"
	I0211 03:17:03.171986   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:17:03.172047   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:17:03.205540   63944 cri.go:89] found id: ""
	I0211 03:17:03.205574   63944 logs.go:282] 0 containers: []
	W0211 03:17:03.205585   63944 logs.go:284] No container was found matching "kube-proxy"
	I0211 03:17:03.205593   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:17:03.205651   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:17:03.246774   63944 cri.go:89] found id: ""
	I0211 03:17:03.246810   63944 logs.go:282] 0 containers: []
	W0211 03:17:03.246821   63944 logs.go:284] No container was found matching "kube-controller-manager"
	I0211 03:17:03.246831   63944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:17:03.246906   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:17:03.280817   63944 cri.go:89] found id: ""
	I0211 03:17:03.280848   63944 logs.go:282] 0 containers: []
	W0211 03:17:03.280859   63944 logs.go:284] No container was found matching "kindnet"
	I0211 03:17:03.280866   63944 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0211 03:17:03.280925   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0211 03:17:03.314634   63944 cri.go:89] found id: ""
	I0211 03:17:03.314657   63944 logs.go:282] 0 containers: []
	W0211 03:17:03.314667   63944 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0211 03:17:03.314678   63944 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:17:03.314692   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:17:03.382708   63944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:17:03.382738   63944 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:17:03.382754   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:17:03.470189   63944 logs.go:123] Gathering logs for container status ...
	I0211 03:17:03.470226   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:17:03.513102   63944 logs.go:123] Gathering logs for kubelet ...
	I0211 03:17:03.513127   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:17:03.580333   63944 logs.go:123] Gathering logs for dmesg ...
	I0211 03:17:03.580377   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:17:06.096980   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:17:06.109155   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:17:06.109211   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:17:06.141280   63944 cri.go:89] found id: ""
	I0211 03:17:06.141307   63944 logs.go:282] 0 containers: []
	W0211 03:17:06.141316   63944 logs.go:284] No container was found matching "kube-apiserver"
	I0211 03:17:06.141322   63944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:17:06.141369   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:17:06.194633   63944 cri.go:89] found id: ""
	I0211 03:17:06.194663   63944 logs.go:282] 0 containers: []
	W0211 03:17:06.194671   63944 logs.go:284] No container was found matching "etcd"
	I0211 03:17:06.194677   63944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:17:06.194735   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:17:06.229327   63944 cri.go:89] found id: ""
	I0211 03:17:06.229350   63944 logs.go:282] 0 containers: []
	W0211 03:17:06.229359   63944 logs.go:284] No container was found matching "coredns"
	I0211 03:17:06.229371   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:17:06.229434   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:17:06.264896   63944 cri.go:89] found id: ""
	I0211 03:17:06.264925   63944 logs.go:282] 0 containers: []
	W0211 03:17:06.264932   63944 logs.go:284] No container was found matching "kube-scheduler"
	I0211 03:17:06.264938   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:17:06.264986   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:17:06.297735   63944 cri.go:89] found id: ""
	I0211 03:17:06.297767   63944 logs.go:282] 0 containers: []
	W0211 03:17:06.297778   63944 logs.go:284] No container was found matching "kube-proxy"
	I0211 03:17:06.297786   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:17:06.297847   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:17:06.330390   63944 cri.go:89] found id: ""
	I0211 03:17:06.330427   63944 logs.go:282] 0 containers: []
	W0211 03:17:06.330436   63944 logs.go:284] No container was found matching "kube-controller-manager"
	I0211 03:17:06.330445   63944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:17:06.330509   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:17:06.366430   63944 cri.go:89] found id: ""
	I0211 03:17:06.366460   63944 logs.go:282] 0 containers: []
	W0211 03:17:06.366471   63944 logs.go:284] No container was found matching "kindnet"
	I0211 03:17:06.366479   63944 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0211 03:17:06.366536   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0211 03:17:06.401688   63944 cri.go:89] found id: ""
	I0211 03:17:06.401716   63944 logs.go:282] 0 containers: []
	W0211 03:17:06.401725   63944 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0211 03:17:06.401733   63944 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:17:06.401745   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:17:06.479163   63944 logs.go:123] Gathering logs for container status ...
	I0211 03:17:06.479191   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:17:06.522071   63944 logs.go:123] Gathering logs for kubelet ...
	I0211 03:17:06.522097   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:17:06.574840   63944 logs.go:123] Gathering logs for dmesg ...
	I0211 03:17:06.574893   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:17:06.589045   63944 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:17:06.589083   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:17:06.661571   63944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:17:09.163054   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:17:09.177146   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:17:09.177233   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:17:09.219128   63944 cri.go:89] found id: ""
	I0211 03:17:09.219165   63944 logs.go:282] 0 containers: []
	W0211 03:17:09.219177   63944 logs.go:284] No container was found matching "kube-apiserver"
	I0211 03:17:09.219185   63944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:17:09.219248   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:17:09.251693   63944 cri.go:89] found id: ""
	I0211 03:17:09.251724   63944 logs.go:282] 0 containers: []
	W0211 03:17:09.251736   63944 logs.go:284] No container was found matching "etcd"
	I0211 03:17:09.251744   63944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:17:09.251808   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:17:09.285704   63944 cri.go:89] found id: ""
	I0211 03:17:09.285731   63944 logs.go:282] 0 containers: []
	W0211 03:17:09.285743   63944 logs.go:284] No container was found matching "coredns"
	I0211 03:17:09.285750   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:17:09.285812   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:17:09.316260   63944 cri.go:89] found id: ""
	I0211 03:17:09.316293   63944 logs.go:282] 0 containers: []
	W0211 03:17:09.316304   63944 logs.go:284] No container was found matching "kube-scheduler"
	I0211 03:17:09.316314   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:17:09.316379   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:17:09.350005   63944 cri.go:89] found id: ""
	I0211 03:17:09.350033   63944 logs.go:282] 0 containers: []
	W0211 03:17:09.350044   63944 logs.go:284] No container was found matching "kube-proxy"
	I0211 03:17:09.350051   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:17:09.350108   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:17:09.382289   63944 cri.go:89] found id: ""
	I0211 03:17:09.382321   63944 logs.go:282] 0 containers: []
	W0211 03:17:09.382333   63944 logs.go:284] No container was found matching "kube-controller-manager"
	I0211 03:17:09.382342   63944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:17:09.382403   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:17:09.417420   63944 cri.go:89] found id: ""
	I0211 03:17:09.417450   63944 logs.go:282] 0 containers: []
	W0211 03:17:09.417462   63944 logs.go:284] No container was found matching "kindnet"
	I0211 03:17:09.417470   63944 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0211 03:17:09.417517   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0211 03:17:09.448648   63944 cri.go:89] found id: ""
	I0211 03:17:09.448677   63944 logs.go:282] 0 containers: []
	W0211 03:17:09.448689   63944 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0211 03:17:09.448699   63944 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:17:09.448712   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:17:09.526478   63944 logs.go:123] Gathering logs for container status ...
	I0211 03:17:09.526514   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:17:09.568524   63944 logs.go:123] Gathering logs for kubelet ...
	I0211 03:17:09.568549   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:17:09.635957   63944 logs.go:123] Gathering logs for dmesg ...
	I0211 03:17:09.635996   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:17:09.651851   63944 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:17:09.651894   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:17:09.718104   63944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:17:12.219011   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:17:12.232144   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:17:12.232224   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:17:12.264153   63944 cri.go:89] found id: ""
	I0211 03:17:12.264188   63944 logs.go:282] 0 containers: []
	W0211 03:17:12.264198   63944 logs.go:284] No container was found matching "kube-apiserver"
	I0211 03:17:12.264207   63944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:17:12.264259   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:17:12.301028   63944 cri.go:89] found id: ""
	I0211 03:17:12.301060   63944 logs.go:282] 0 containers: []
	W0211 03:17:12.301071   63944 logs.go:284] No container was found matching "etcd"
	I0211 03:17:12.301078   63944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:17:12.301155   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:17:12.343816   63944 cri.go:89] found id: ""
	I0211 03:17:12.343847   63944 logs.go:282] 0 containers: []
	W0211 03:17:12.343859   63944 logs.go:284] No container was found matching "coredns"
	I0211 03:17:12.343869   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:17:12.343932   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:17:12.379431   63944 cri.go:89] found id: ""
	I0211 03:17:12.379463   63944 logs.go:282] 0 containers: []
	W0211 03:17:12.379474   63944 logs.go:284] No container was found matching "kube-scheduler"
	I0211 03:17:12.379482   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:17:12.379545   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:17:12.420445   63944 cri.go:89] found id: ""
	I0211 03:17:12.420478   63944 logs.go:282] 0 containers: []
	W0211 03:17:12.420488   63944 logs.go:284] No container was found matching "kube-proxy"
	I0211 03:17:12.420496   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:17:12.420549   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:17:12.459055   63944 cri.go:89] found id: ""
	I0211 03:17:12.459087   63944 logs.go:282] 0 containers: []
	W0211 03:17:12.459098   63944 logs.go:284] No container was found matching "kube-controller-manager"
	I0211 03:17:12.459105   63944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:17:12.459164   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:17:12.497789   63944 cri.go:89] found id: ""
	I0211 03:17:12.497825   63944 logs.go:282] 0 containers: []
	W0211 03:17:12.497839   63944 logs.go:284] No container was found matching "kindnet"
	I0211 03:17:12.497848   63944 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0211 03:17:12.497912   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0211 03:17:12.545229   63944 cri.go:89] found id: ""
	I0211 03:17:12.545316   63944 logs.go:282] 0 containers: []
	W0211 03:17:12.545353   63944 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0211 03:17:12.545396   63944 logs.go:123] Gathering logs for kubelet ...
	I0211 03:17:12.545414   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:17:12.601964   63944 logs.go:123] Gathering logs for dmesg ...
	I0211 03:17:12.601999   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:17:12.617856   63944 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:17:12.617882   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:17:12.727879   63944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:17:12.727906   63944 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:17:12.727926   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:17:12.802663   63944 logs.go:123] Gathering logs for container status ...
	I0211 03:17:12.802695   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:17:15.343018   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:17:15.355426   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:17:15.355516   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:17:15.392891   63944 cri.go:89] found id: ""
	I0211 03:17:15.392916   63944 logs.go:282] 0 containers: []
	W0211 03:17:15.392925   63944 logs.go:284] No container was found matching "kube-apiserver"
	I0211 03:17:15.392931   63944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:17:15.392996   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:17:15.430207   63944 cri.go:89] found id: ""
	I0211 03:17:15.430235   63944 logs.go:282] 0 containers: []
	W0211 03:17:15.430246   63944 logs.go:284] No container was found matching "etcd"
	I0211 03:17:15.430254   63944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:17:15.430306   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:17:15.466647   63944 cri.go:89] found id: ""
	I0211 03:17:15.466673   63944 logs.go:282] 0 containers: []
	W0211 03:17:15.466684   63944 logs.go:284] No container was found matching "coredns"
	I0211 03:17:15.466691   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:17:15.466755   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:17:15.504601   63944 cri.go:89] found id: ""
	I0211 03:17:15.504631   63944 logs.go:282] 0 containers: []
	W0211 03:17:15.504642   63944 logs.go:284] No container was found matching "kube-scheduler"
	I0211 03:17:15.504651   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:17:15.504711   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:17:15.546049   63944 cri.go:89] found id: ""
	I0211 03:17:15.546077   63944 logs.go:282] 0 containers: []
	W0211 03:17:15.546086   63944 logs.go:284] No container was found matching "kube-proxy"
	I0211 03:17:15.546093   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:17:15.546155   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:17:15.581902   63944 cri.go:89] found id: ""
	I0211 03:17:15.581927   63944 logs.go:282] 0 containers: []
	W0211 03:17:15.581937   63944 logs.go:284] No container was found matching "kube-controller-manager"
	I0211 03:17:15.581944   63944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:17:15.582001   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:17:15.619818   63944 cri.go:89] found id: ""
	I0211 03:17:15.619846   63944 logs.go:282] 0 containers: []
	W0211 03:17:15.619855   63944 logs.go:284] No container was found matching "kindnet"
	I0211 03:17:15.619863   63944 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0211 03:17:15.619919   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0211 03:17:15.651608   63944 cri.go:89] found id: ""
	I0211 03:17:15.651638   63944 logs.go:282] 0 containers: []
	W0211 03:17:15.651648   63944 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0211 03:17:15.651659   63944 logs.go:123] Gathering logs for dmesg ...
	I0211 03:17:15.651673   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:17:15.663807   63944 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:17:15.663836   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:17:15.732461   63944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:17:15.732494   63944 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:17:15.732510   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:17:15.806627   63944 logs.go:123] Gathering logs for container status ...
	I0211 03:17:15.806661   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:17:15.845318   63944 logs.go:123] Gathering logs for kubelet ...
	I0211 03:17:15.845350   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:17:18.399723   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:17:18.416613   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:17:18.416672   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:17:18.486427   63944 cri.go:89] found id: ""
	I0211 03:17:18.486463   63944 logs.go:282] 0 containers: []
	W0211 03:17:18.486475   63944 logs.go:284] No container was found matching "kube-apiserver"
	I0211 03:17:18.486484   63944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:17:18.486548   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:17:18.544955   63944 cri.go:89] found id: ""
	I0211 03:17:18.544985   63944 logs.go:282] 0 containers: []
	W0211 03:17:18.544995   63944 logs.go:284] No container was found matching "etcd"
	I0211 03:17:18.545002   63944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:17:18.545063   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:17:18.582326   63944 cri.go:89] found id: ""
	I0211 03:17:18.582356   63944 logs.go:282] 0 containers: []
	W0211 03:17:18.582366   63944 logs.go:284] No container was found matching "coredns"
	I0211 03:17:18.582374   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:17:18.582442   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:17:18.619370   63944 cri.go:89] found id: ""
	I0211 03:17:18.619402   63944 logs.go:282] 0 containers: []
	W0211 03:17:18.619420   63944 logs.go:284] No container was found matching "kube-scheduler"
	I0211 03:17:18.619428   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:17:18.619492   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:17:18.657009   63944 cri.go:89] found id: ""
	I0211 03:17:18.657033   63944 logs.go:282] 0 containers: []
	W0211 03:17:18.657043   63944 logs.go:284] No container was found matching "kube-proxy"
	I0211 03:17:18.657050   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:17:18.657106   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:17:18.694410   63944 cri.go:89] found id: ""
	I0211 03:17:18.694438   63944 logs.go:282] 0 containers: []
	W0211 03:17:18.694449   63944 logs.go:284] No container was found matching "kube-controller-manager"
	I0211 03:17:18.694457   63944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:17:18.694517   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:17:18.728072   63944 cri.go:89] found id: ""
	I0211 03:17:18.728105   63944 logs.go:282] 0 containers: []
	W0211 03:17:18.728116   63944 logs.go:284] No container was found matching "kindnet"
	I0211 03:17:18.728124   63944 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0211 03:17:18.728197   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0211 03:17:18.764340   63944 cri.go:89] found id: ""
	I0211 03:17:18.764372   63944 logs.go:282] 0 containers: []
	W0211 03:17:18.764382   63944 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0211 03:17:18.764392   63944 logs.go:123] Gathering logs for dmesg ...
	I0211 03:17:18.764406   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:17:18.778965   63944 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:17:18.778984   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:17:18.847798   63944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:17:18.847832   63944 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:17:18.847847   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:17:18.959773   63944 logs.go:123] Gathering logs for container status ...
	I0211 03:17:18.959816   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:17:19.003251   63944 logs.go:123] Gathering logs for kubelet ...
	I0211 03:17:19.003287   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:17:21.572090   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:17:21.585341   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:17:21.585410   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:17:21.621414   63944 cri.go:89] found id: ""
	I0211 03:17:21.621441   63944 logs.go:282] 0 containers: []
	W0211 03:17:21.621449   63944 logs.go:284] No container was found matching "kube-apiserver"
	I0211 03:17:21.621454   63944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:17:21.621512   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:17:21.655720   63944 cri.go:89] found id: ""
	I0211 03:17:21.655749   63944 logs.go:282] 0 containers: []
	W0211 03:17:21.655757   63944 logs.go:284] No container was found matching "etcd"
	I0211 03:17:21.655764   63944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:17:21.655820   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:17:21.693871   63944 cri.go:89] found id: ""
	I0211 03:17:21.693904   63944 logs.go:282] 0 containers: []
	W0211 03:17:21.693914   63944 logs.go:284] No container was found matching "coredns"
	I0211 03:17:21.693923   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:17:21.693977   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:17:21.727672   63944 cri.go:89] found id: ""
	I0211 03:17:21.727697   63944 logs.go:282] 0 containers: []
	W0211 03:17:21.727706   63944 logs.go:284] No container was found matching "kube-scheduler"
	I0211 03:17:21.727712   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:17:21.727762   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:17:21.770113   63944 cri.go:89] found id: ""
	I0211 03:17:21.770138   63944 logs.go:282] 0 containers: []
	W0211 03:17:21.770150   63944 logs.go:284] No container was found matching "kube-proxy"
	I0211 03:17:21.770157   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:17:21.770226   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:17:21.808143   63944 cri.go:89] found id: ""
	I0211 03:17:21.808190   63944 logs.go:282] 0 containers: []
	W0211 03:17:21.808201   63944 logs.go:284] No container was found matching "kube-controller-manager"
	I0211 03:17:21.808208   63944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:17:21.808269   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:17:21.845512   63944 cri.go:89] found id: ""
	I0211 03:17:21.845543   63944 logs.go:282] 0 containers: []
	W0211 03:17:21.845554   63944 logs.go:284] No container was found matching "kindnet"
	I0211 03:17:21.845562   63944 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0211 03:17:21.845622   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0211 03:17:21.880209   63944 cri.go:89] found id: ""
	I0211 03:17:21.880241   63944 logs.go:282] 0 containers: []
	W0211 03:17:21.880252   63944 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0211 03:17:21.880263   63944 logs.go:123] Gathering logs for kubelet ...
	I0211 03:17:21.880276   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:17:21.933045   63944 logs.go:123] Gathering logs for dmesg ...
	I0211 03:17:21.933078   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:17:21.947636   63944 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:17:21.947662   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:17:22.016696   63944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:17:22.016718   63944 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:17:22.016734   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:17:22.093435   63944 logs.go:123] Gathering logs for container status ...
	I0211 03:17:22.093473   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:17:24.632449   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:17:24.644667   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:17:24.644724   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:17:24.680679   63944 cri.go:89] found id: ""
	I0211 03:17:24.680707   63944 logs.go:282] 0 containers: []
	W0211 03:17:24.680717   63944 logs.go:284] No container was found matching "kube-apiserver"
	I0211 03:17:24.680723   63944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:17:24.680785   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:17:24.721822   63944 cri.go:89] found id: ""
	I0211 03:17:24.721847   63944 logs.go:282] 0 containers: []
	W0211 03:17:24.721856   63944 logs.go:284] No container was found matching "etcd"
	I0211 03:17:24.721861   63944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:17:24.721913   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:17:24.765985   63944 cri.go:89] found id: ""
	I0211 03:17:24.766028   63944 logs.go:282] 0 containers: []
	W0211 03:17:24.766041   63944 logs.go:284] No container was found matching "coredns"
	I0211 03:17:24.766049   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:17:24.766116   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:17:24.803698   63944 cri.go:89] found id: ""
	I0211 03:17:24.803733   63944 logs.go:282] 0 containers: []
	W0211 03:17:24.803745   63944 logs.go:284] No container was found matching "kube-scheduler"
	I0211 03:17:24.803753   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:17:24.803819   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:17:24.842818   63944 cri.go:89] found id: ""
	I0211 03:17:24.842862   63944 logs.go:282] 0 containers: []
	W0211 03:17:24.842891   63944 logs.go:284] No container was found matching "kube-proxy"
	I0211 03:17:24.842901   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:17:24.842973   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:17:24.877458   63944 cri.go:89] found id: ""
	I0211 03:17:24.877490   63944 logs.go:282] 0 containers: []
	W0211 03:17:24.877511   63944 logs.go:284] No container was found matching "kube-controller-manager"
	I0211 03:17:24.877519   63944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:17:24.877595   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:17:24.915182   63944 cri.go:89] found id: ""
	I0211 03:17:24.915225   63944 logs.go:282] 0 containers: []
	W0211 03:17:24.915235   63944 logs.go:284] No container was found matching "kindnet"
	I0211 03:17:24.915243   63944 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0211 03:17:24.915311   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0211 03:17:24.950193   63944 cri.go:89] found id: ""
	I0211 03:17:24.950232   63944 logs.go:282] 0 containers: []
	W0211 03:17:24.950244   63944 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0211 03:17:24.950255   63944 logs.go:123] Gathering logs for kubelet ...
	I0211 03:17:24.950270   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:17:25.001342   63944 logs.go:123] Gathering logs for dmesg ...
	I0211 03:17:25.001379   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:17:25.015341   63944 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:17:25.015367   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:17:25.087384   63944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:17:25.087414   63944 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:17:25.087445   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:17:25.167915   63944 logs.go:123] Gathering logs for container status ...
	I0211 03:17:25.167952   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:17:27.708514   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:17:27.720924   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:17:27.720986   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:17:27.758465   63944 cri.go:89] found id: ""
	I0211 03:17:27.758491   63944 logs.go:282] 0 containers: []
	W0211 03:17:27.758499   63944 logs.go:284] No container was found matching "kube-apiserver"
	I0211 03:17:27.758504   63944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:17:27.758552   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:17:27.788963   63944 cri.go:89] found id: ""
	I0211 03:17:27.788990   63944 logs.go:282] 0 containers: []
	W0211 03:17:27.788997   63944 logs.go:284] No container was found matching "etcd"
	I0211 03:17:27.789002   63944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:17:27.789055   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:17:27.818717   63944 cri.go:89] found id: ""
	I0211 03:17:27.818742   63944 logs.go:282] 0 containers: []
	W0211 03:17:27.818748   63944 logs.go:284] No container was found matching "coredns"
	I0211 03:17:27.818754   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:17:27.818798   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:17:27.849434   63944 cri.go:89] found id: ""
	I0211 03:17:27.849459   63944 logs.go:282] 0 containers: []
	W0211 03:17:27.849467   63944 logs.go:284] No container was found matching "kube-scheduler"
	I0211 03:17:27.849472   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:17:27.849534   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:17:27.880093   63944 cri.go:89] found id: ""
	I0211 03:17:27.880122   63944 logs.go:282] 0 containers: []
	W0211 03:17:27.880132   63944 logs.go:284] No container was found matching "kube-proxy"
	I0211 03:17:27.880139   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:17:27.880200   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:17:27.916810   63944 cri.go:89] found id: ""
	I0211 03:17:27.916840   63944 logs.go:282] 0 containers: []
	W0211 03:17:27.916851   63944 logs.go:284] No container was found matching "kube-controller-manager"
	I0211 03:17:27.916859   63944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:17:27.916919   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:17:27.952491   63944 cri.go:89] found id: ""
	I0211 03:17:27.952516   63944 logs.go:282] 0 containers: []
	W0211 03:17:27.952525   63944 logs.go:284] No container was found matching "kindnet"
	I0211 03:17:27.952536   63944 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0211 03:17:27.952590   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0211 03:17:27.986952   63944 cri.go:89] found id: ""
	I0211 03:17:27.986979   63944 logs.go:282] 0 containers: []
	W0211 03:17:27.986990   63944 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0211 03:17:27.987002   63944 logs.go:123] Gathering logs for kubelet ...
	I0211 03:17:27.987015   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:17:28.034808   63944 logs.go:123] Gathering logs for dmesg ...
	I0211 03:17:28.034835   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:17:28.047731   63944 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:17:28.047756   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:17:28.110546   63944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:17:28.110570   63944 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:17:28.110587   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:17:28.182353   63944 logs.go:123] Gathering logs for container status ...
	I0211 03:17:28.182386   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:17:30.720513   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:17:30.736211   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:17:30.736265   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:17:30.773298   63944 cri.go:89] found id: ""
	I0211 03:17:30.773326   63944 logs.go:282] 0 containers: []
	W0211 03:17:30.773336   63944 logs.go:284] No container was found matching "kube-apiserver"
	I0211 03:17:30.773344   63944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:17:30.773391   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:17:30.807870   63944 cri.go:89] found id: ""
	I0211 03:17:30.807900   63944 logs.go:282] 0 containers: []
	W0211 03:17:30.807912   63944 logs.go:284] No container was found matching "etcd"
	I0211 03:17:30.807919   63944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:17:30.807975   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:17:30.858425   63944 cri.go:89] found id: ""
	I0211 03:17:30.858449   63944 logs.go:282] 0 containers: []
	W0211 03:17:30.858457   63944 logs.go:284] No container was found matching "coredns"
	I0211 03:17:30.858463   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:17:30.858523   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:17:30.893173   63944 cri.go:89] found id: ""
	I0211 03:17:30.893202   63944 logs.go:282] 0 containers: []
	W0211 03:17:30.893214   63944 logs.go:284] No container was found matching "kube-scheduler"
	I0211 03:17:30.893222   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:17:30.893282   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:17:30.927404   63944 cri.go:89] found id: ""
	I0211 03:17:30.927447   63944 logs.go:282] 0 containers: []
	W0211 03:17:30.927458   63944 logs.go:284] No container was found matching "kube-proxy"
	I0211 03:17:30.927465   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:17:30.927522   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:17:30.963623   63944 cri.go:89] found id: ""
	I0211 03:17:30.963652   63944 logs.go:282] 0 containers: []
	W0211 03:17:30.963663   63944 logs.go:284] No container was found matching "kube-controller-manager"
	I0211 03:17:30.963670   63944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:17:30.963740   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:17:30.999623   63944 cri.go:89] found id: ""
	I0211 03:17:30.999650   63944 logs.go:282] 0 containers: []
	W0211 03:17:30.999660   63944 logs.go:284] No container was found matching "kindnet"
	I0211 03:17:30.999668   63944 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0211 03:17:30.999726   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0211 03:17:31.033119   63944 cri.go:89] found id: ""
	I0211 03:17:31.033145   63944 logs.go:282] 0 containers: []
	W0211 03:17:31.033153   63944 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0211 03:17:31.033162   63944 logs.go:123] Gathering logs for kubelet ...
	I0211 03:17:31.033175   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:17:31.100889   63944 logs.go:123] Gathering logs for dmesg ...
	I0211 03:17:31.100923   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:17:31.116354   63944 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:17:31.116379   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:17:31.191279   63944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:17:31.191300   63944 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:17:31.191315   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:17:31.276305   63944 logs.go:123] Gathering logs for container status ...
	I0211 03:17:31.276344   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:17:33.820054   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:17:33.833941   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:17:33.834005   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:17:33.873439   63944 cri.go:89] found id: ""
	I0211 03:17:33.873474   63944 logs.go:282] 0 containers: []
	W0211 03:17:33.873486   63944 logs.go:284] No container was found matching "kube-apiserver"
	I0211 03:17:33.873494   63944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:17:33.873566   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:17:33.914840   63944 cri.go:89] found id: ""
	I0211 03:17:33.914892   63944 logs.go:282] 0 containers: []
	W0211 03:17:33.914905   63944 logs.go:284] No container was found matching "etcd"
	I0211 03:17:33.914914   63944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:17:33.914980   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:17:33.949028   63944 cri.go:89] found id: ""
	I0211 03:17:33.949055   63944 logs.go:282] 0 containers: []
	W0211 03:17:33.949065   63944 logs.go:284] No container was found matching "coredns"
	I0211 03:17:33.949072   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:17:33.949153   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:17:33.984722   63944 cri.go:89] found id: ""
	I0211 03:17:33.984762   63944 logs.go:282] 0 containers: []
	W0211 03:17:33.984771   63944 logs.go:284] No container was found matching "kube-scheduler"
	I0211 03:17:33.984777   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:17:33.984833   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:17:34.021819   63944 cri.go:89] found id: ""
	I0211 03:17:34.021848   63944 logs.go:282] 0 containers: []
	W0211 03:17:34.021858   63944 logs.go:284] No container was found matching "kube-proxy"
	I0211 03:17:34.021867   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:17:34.021929   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:17:34.056927   63944 cri.go:89] found id: ""
	I0211 03:17:34.056955   63944 logs.go:282] 0 containers: []
	W0211 03:17:34.056966   63944 logs.go:284] No container was found matching "kube-controller-manager"
	I0211 03:17:34.056974   63944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:17:34.057032   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:17:34.094425   63944 cri.go:89] found id: ""
	I0211 03:17:34.094453   63944 logs.go:282] 0 containers: []
	W0211 03:17:34.094463   63944 logs.go:284] No container was found matching "kindnet"
	I0211 03:17:34.094470   63944 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0211 03:17:34.094531   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0211 03:17:34.127487   63944 cri.go:89] found id: ""
	I0211 03:17:34.127514   63944 logs.go:282] 0 containers: []
	W0211 03:17:34.127523   63944 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0211 03:17:34.127533   63944 logs.go:123] Gathering logs for kubelet ...
	I0211 03:17:34.127548   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:17:34.179369   63944 logs.go:123] Gathering logs for dmesg ...
	I0211 03:17:34.179410   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:17:34.194060   63944 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:17:34.194086   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:17:34.267089   63944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:17:34.267120   63944 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:17:34.267135   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:17:34.346829   63944 logs.go:123] Gathering logs for container status ...
	I0211 03:17:34.346864   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:17:36.889519   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:17:36.901978   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:17:36.902054   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:17:36.936480   63944 cri.go:89] found id: ""
	I0211 03:17:36.936508   63944 logs.go:282] 0 containers: []
	W0211 03:17:36.936518   63944 logs.go:284] No container was found matching "kube-apiserver"
	I0211 03:17:36.936526   63944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:17:36.936583   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:17:36.969456   63944 cri.go:89] found id: ""
	I0211 03:17:36.969486   63944 logs.go:282] 0 containers: []
	W0211 03:17:36.969498   63944 logs.go:284] No container was found matching "etcd"
	I0211 03:17:36.969505   63944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:17:36.969565   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:17:37.006546   63944 cri.go:89] found id: ""
	I0211 03:17:37.006572   63944 logs.go:282] 0 containers: []
	W0211 03:17:37.006581   63944 logs.go:284] No container was found matching "coredns"
	I0211 03:17:37.006586   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:17:37.006663   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:17:37.048108   63944 cri.go:89] found id: ""
	I0211 03:17:37.048148   63944 logs.go:282] 0 containers: []
	W0211 03:17:37.048161   63944 logs.go:284] No container was found matching "kube-scheduler"
	I0211 03:17:37.048173   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:17:37.048250   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:17:37.080320   63944 cri.go:89] found id: ""
	I0211 03:17:37.080354   63944 logs.go:282] 0 containers: []
	W0211 03:17:37.080366   63944 logs.go:284] No container was found matching "kube-proxy"
	I0211 03:17:37.080374   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:17:37.080437   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:17:37.117897   63944 cri.go:89] found id: ""
	I0211 03:17:37.117926   63944 logs.go:282] 0 containers: []
	W0211 03:17:37.117937   63944 logs.go:284] No container was found matching "kube-controller-manager"
	I0211 03:17:37.117945   63944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:17:37.118007   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:17:37.150157   63944 cri.go:89] found id: ""
	I0211 03:17:37.150183   63944 logs.go:282] 0 containers: []
	W0211 03:17:37.150191   63944 logs.go:284] No container was found matching "kindnet"
	I0211 03:17:37.150197   63944 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0211 03:17:37.150255   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0211 03:17:37.183835   63944 cri.go:89] found id: ""
	I0211 03:17:37.183869   63944 logs.go:282] 0 containers: []
	W0211 03:17:37.183881   63944 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0211 03:17:37.183893   63944 logs.go:123] Gathering logs for kubelet ...
	I0211 03:17:37.183909   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:17:37.236725   63944 logs.go:123] Gathering logs for dmesg ...
	I0211 03:17:37.236762   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:17:37.249486   63944 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:17:37.249512   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:17:37.315662   63944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:17:37.315685   63944 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:17:37.315699   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:17:37.405827   63944 logs.go:123] Gathering logs for container status ...
	I0211 03:17:37.405860   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:17:39.947481   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:17:39.960816   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:17:39.960892   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:17:40.004992   63944 cri.go:89] found id: ""
	I0211 03:17:40.005024   63944 logs.go:282] 0 containers: []
	W0211 03:17:40.005036   63944 logs.go:284] No container was found matching "kube-apiserver"
	I0211 03:17:40.005044   63944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:17:40.005105   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:17:40.038794   63944 cri.go:89] found id: ""
	I0211 03:17:40.038822   63944 logs.go:282] 0 containers: []
	W0211 03:17:40.038832   63944 logs.go:284] No container was found matching "etcd"
	I0211 03:17:40.038839   63944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:17:40.038909   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:17:40.074442   63944 cri.go:89] found id: ""
	I0211 03:17:40.074471   63944 logs.go:282] 0 containers: []
	W0211 03:17:40.074480   63944 logs.go:284] No container was found matching "coredns"
	I0211 03:17:40.074487   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:17:40.074547   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:17:40.108890   63944 cri.go:89] found id: ""
	I0211 03:17:40.108920   63944 logs.go:282] 0 containers: []
	W0211 03:17:40.108931   63944 logs.go:284] No container was found matching "kube-scheduler"
	I0211 03:17:40.108938   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:17:40.109004   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:17:40.146103   63944 cri.go:89] found id: ""
	I0211 03:17:40.146133   63944 logs.go:282] 0 containers: []
	W0211 03:17:40.146144   63944 logs.go:284] No container was found matching "kube-proxy"
	I0211 03:17:40.146151   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:17:40.146209   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:17:40.188373   63944 cri.go:89] found id: ""
	I0211 03:17:40.188409   63944 logs.go:282] 0 containers: []
	W0211 03:17:40.188420   63944 logs.go:284] No container was found matching "kube-controller-manager"
	I0211 03:17:40.188429   63944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:17:40.188498   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:17:40.229705   63944 cri.go:89] found id: ""
	I0211 03:17:40.229734   63944 logs.go:282] 0 containers: []
	W0211 03:17:40.229744   63944 logs.go:284] No container was found matching "kindnet"
	I0211 03:17:40.229751   63944 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0211 03:17:40.229808   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0211 03:17:40.262864   63944 cri.go:89] found id: ""
	I0211 03:17:40.262920   63944 logs.go:282] 0 containers: []
	W0211 03:17:40.262929   63944 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0211 03:17:40.262939   63944 logs.go:123] Gathering logs for kubelet ...
	I0211 03:17:40.262950   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:17:40.325893   63944 logs.go:123] Gathering logs for dmesg ...
	I0211 03:17:40.325940   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:17:40.343845   63944 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:17:40.343884   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:17:40.433974   63944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:17:40.434046   63944 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:17:40.434071   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:17:40.512186   63944 logs.go:123] Gathering logs for container status ...
	I0211 03:17:40.512227   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:17:43.063459   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:17:43.080830   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:17:43.080905   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:17:43.125436   63944 cri.go:89] found id: ""
	I0211 03:17:43.125469   63944 logs.go:282] 0 containers: []
	W0211 03:17:43.125488   63944 logs.go:284] No container was found matching "kube-apiserver"
	I0211 03:17:43.125495   63944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:17:43.125558   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:17:43.168495   63944 cri.go:89] found id: ""
	I0211 03:17:43.168527   63944 logs.go:282] 0 containers: []
	W0211 03:17:43.168537   63944 logs.go:284] No container was found matching "etcd"
	I0211 03:17:43.168544   63944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:17:43.168601   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:17:43.212661   63944 cri.go:89] found id: ""
	I0211 03:17:43.212692   63944 logs.go:282] 0 containers: []
	W0211 03:17:43.212702   63944 logs.go:284] No container was found matching "coredns"
	I0211 03:17:43.212709   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:17:43.212767   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:17:43.257045   63944 cri.go:89] found id: ""
	I0211 03:17:43.257080   63944 logs.go:282] 0 containers: []
	W0211 03:17:43.257092   63944 logs.go:284] No container was found matching "kube-scheduler"
	I0211 03:17:43.257101   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:17:43.257174   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:17:43.294376   63944 cri.go:89] found id: ""
	I0211 03:17:43.294402   63944 logs.go:282] 0 containers: []
	W0211 03:17:43.294413   63944 logs.go:284] No container was found matching "kube-proxy"
	I0211 03:17:43.294421   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:17:43.294481   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:17:43.327667   63944 cri.go:89] found id: ""
	I0211 03:17:43.327690   63944 logs.go:282] 0 containers: []
	W0211 03:17:43.327697   63944 logs.go:284] No container was found matching "kube-controller-manager"
	I0211 03:17:43.327708   63944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:17:43.327754   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:17:43.362984   63944 cri.go:89] found id: ""
	I0211 03:17:43.363013   63944 logs.go:282] 0 containers: []
	W0211 03:17:43.363024   63944 logs.go:284] No container was found matching "kindnet"
	I0211 03:17:43.363031   63944 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0211 03:17:43.363082   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0211 03:17:43.399512   63944 cri.go:89] found id: ""
	I0211 03:17:43.399548   63944 logs.go:282] 0 containers: []
	W0211 03:17:43.399560   63944 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0211 03:17:43.399571   63944 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:17:43.399585   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:17:43.471580   63944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:17:43.471604   63944 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:17:43.471618   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:17:43.570848   63944 logs.go:123] Gathering logs for container status ...
	I0211 03:17:43.570898   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:17:43.610696   63944 logs.go:123] Gathering logs for kubelet ...
	I0211 03:17:43.610731   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:17:43.662933   63944 logs.go:123] Gathering logs for dmesg ...
	I0211 03:17:43.662970   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:17:46.175942   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:17:46.190536   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:17:46.190608   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:17:46.227780   63944 cri.go:89] found id: ""
	I0211 03:17:46.227810   63944 logs.go:282] 0 containers: []
	W0211 03:17:46.227821   63944 logs.go:284] No container was found matching "kube-apiserver"
	I0211 03:17:46.227828   63944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:17:46.227893   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:17:46.264910   63944 cri.go:89] found id: ""
	I0211 03:17:46.264936   63944 logs.go:282] 0 containers: []
	W0211 03:17:46.264944   63944 logs.go:284] No container was found matching "etcd"
	I0211 03:17:46.264949   63944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:17:46.264995   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:17:46.308275   63944 cri.go:89] found id: ""
	I0211 03:17:46.308300   63944 logs.go:282] 0 containers: []
	W0211 03:17:46.308308   63944 logs.go:284] No container was found matching "coredns"
	I0211 03:17:46.308314   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:17:46.308357   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:17:46.346983   63944 cri.go:89] found id: ""
	I0211 03:17:46.347013   63944 logs.go:282] 0 containers: []
	W0211 03:17:46.347023   63944 logs.go:284] No container was found matching "kube-scheduler"
	I0211 03:17:46.347030   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:17:46.347094   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:17:46.382597   63944 cri.go:89] found id: ""
	I0211 03:17:46.382625   63944 logs.go:282] 0 containers: []
	W0211 03:17:46.382636   63944 logs.go:284] No container was found matching "kube-proxy"
	I0211 03:17:46.382643   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:17:46.382702   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:17:46.415576   63944 cri.go:89] found id: ""
	I0211 03:17:46.415601   63944 logs.go:282] 0 containers: []
	W0211 03:17:46.415608   63944 logs.go:284] No container was found matching "kube-controller-manager"
	I0211 03:17:46.415614   63944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:17:46.415668   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:17:46.450593   63944 cri.go:89] found id: ""
	I0211 03:17:46.450619   63944 logs.go:282] 0 containers: []
	W0211 03:17:46.450630   63944 logs.go:284] No container was found matching "kindnet"
	I0211 03:17:46.450638   63944 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0211 03:17:46.450702   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0211 03:17:46.485661   63944 cri.go:89] found id: ""
	I0211 03:17:46.485704   63944 logs.go:282] 0 containers: []
	W0211 03:17:46.485715   63944 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0211 03:17:46.485725   63944 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:17:46.485738   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:17:46.560788   63944 logs.go:123] Gathering logs for container status ...
	I0211 03:17:46.560819   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:17:46.597236   63944 logs.go:123] Gathering logs for kubelet ...
	I0211 03:17:46.597267   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:17:46.647275   63944 logs.go:123] Gathering logs for dmesg ...
	I0211 03:17:46.647305   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:17:46.667209   63944 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:17:46.667247   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:17:46.740952   63944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:17:49.242676   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:17:49.254865   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:17:49.254949   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:17:49.291333   63944 cri.go:89] found id: ""
	I0211 03:17:49.291360   63944 logs.go:282] 0 containers: []
	W0211 03:17:49.291371   63944 logs.go:284] No container was found matching "kube-apiserver"
	I0211 03:17:49.291378   63944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:17:49.291505   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:17:49.324947   63944 cri.go:89] found id: ""
	I0211 03:17:49.324977   63944 logs.go:282] 0 containers: []
	W0211 03:17:49.324989   63944 logs.go:284] No container was found matching "etcd"
	I0211 03:17:49.324995   63944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:17:49.325048   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:17:49.357075   63944 cri.go:89] found id: ""
	I0211 03:17:49.357103   63944 logs.go:282] 0 containers: []
	W0211 03:17:49.357130   63944 logs.go:284] No container was found matching "coredns"
	I0211 03:17:49.357138   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:17:49.357189   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:17:49.388780   63944 cri.go:89] found id: ""
	I0211 03:17:49.388811   63944 logs.go:282] 0 containers: []
	W0211 03:17:49.388821   63944 logs.go:284] No container was found matching "kube-scheduler"
	I0211 03:17:49.388828   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:17:49.388895   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:17:49.420974   63944 cri.go:89] found id: ""
	I0211 03:17:49.421006   63944 logs.go:282] 0 containers: []
	W0211 03:17:49.421018   63944 logs.go:284] No container was found matching "kube-proxy"
	I0211 03:17:49.421025   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:17:49.421083   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:17:49.453074   63944 cri.go:89] found id: ""
	I0211 03:17:49.453104   63944 logs.go:282] 0 containers: []
	W0211 03:17:49.453115   63944 logs.go:284] No container was found matching "kube-controller-manager"
	I0211 03:17:49.453122   63944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:17:49.453186   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:17:49.485659   63944 cri.go:89] found id: ""
	I0211 03:17:49.485689   63944 logs.go:282] 0 containers: []
	W0211 03:17:49.485700   63944 logs.go:284] No container was found matching "kindnet"
	I0211 03:17:49.485707   63944 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0211 03:17:49.485761   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0211 03:17:49.520921   63944 cri.go:89] found id: ""
	I0211 03:17:49.520948   63944 logs.go:282] 0 containers: []
	W0211 03:17:49.520959   63944 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0211 03:17:49.520970   63944 logs.go:123] Gathering logs for kubelet ...
	I0211 03:17:49.520985   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:17:49.570508   63944 logs.go:123] Gathering logs for dmesg ...
	I0211 03:17:49.570537   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:17:49.583343   63944 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:17:49.583370   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:17:49.654995   63944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:17:49.655017   63944 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:17:49.655029   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:17:49.730207   63944 logs.go:123] Gathering logs for container status ...
	I0211 03:17:49.730248   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:17:52.269047   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:17:52.289329   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:17:52.289409   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:17:52.328706   63944 cri.go:89] found id: ""
	I0211 03:17:52.328731   63944 logs.go:282] 0 containers: []
	W0211 03:17:52.328738   63944 logs.go:284] No container was found matching "kube-apiserver"
	I0211 03:17:52.328744   63944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:17:52.328790   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:17:52.363022   63944 cri.go:89] found id: ""
	I0211 03:17:52.363049   63944 logs.go:282] 0 containers: []
	W0211 03:17:52.363060   63944 logs.go:284] No container was found matching "etcd"
	I0211 03:17:52.363067   63944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:17:52.363127   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:17:52.408585   63944 cri.go:89] found id: ""
	I0211 03:17:52.408610   63944 logs.go:282] 0 containers: []
	W0211 03:17:52.408620   63944 logs.go:284] No container was found matching "coredns"
	I0211 03:17:52.408627   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:17:52.408676   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:17:52.458502   63944 cri.go:89] found id: ""
	I0211 03:17:52.458525   63944 logs.go:282] 0 containers: []
	W0211 03:17:52.458532   63944 logs.go:284] No container was found matching "kube-scheduler"
	I0211 03:17:52.458538   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:17:52.458593   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:17:52.491995   63944 cri.go:89] found id: ""
	I0211 03:17:52.492026   63944 logs.go:282] 0 containers: []
	W0211 03:17:52.492035   63944 logs.go:284] No container was found matching "kube-proxy"
	I0211 03:17:52.492043   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:17:52.492102   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:17:52.527918   63944 cri.go:89] found id: ""
	I0211 03:17:52.527946   63944 logs.go:282] 0 containers: []
	W0211 03:17:52.527955   63944 logs.go:284] No container was found matching "kube-controller-manager"
	I0211 03:17:52.527960   63944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:17:52.528009   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:17:52.565225   63944 cri.go:89] found id: ""
	I0211 03:17:52.565262   63944 logs.go:282] 0 containers: []
	W0211 03:17:52.565272   63944 logs.go:284] No container was found matching "kindnet"
	I0211 03:17:52.565283   63944 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0211 03:17:52.565348   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0211 03:17:52.604514   63944 cri.go:89] found id: ""
	I0211 03:17:52.604541   63944 logs.go:282] 0 containers: []
	W0211 03:17:52.604552   63944 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0211 03:17:52.604562   63944 logs.go:123] Gathering logs for kubelet ...
	I0211 03:17:52.604581   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:17:52.681138   63944 logs.go:123] Gathering logs for dmesg ...
	I0211 03:17:52.681181   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:17:52.699399   63944 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:17:52.699443   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:17:52.789348   63944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:17:52.789370   63944 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:17:52.789383   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:17:52.884038   63944 logs.go:123] Gathering logs for container status ...
	I0211 03:17:52.884079   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:17:55.438237   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:17:55.452871   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:17:55.452942   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:17:55.491576   63944 cri.go:89] found id: ""
	I0211 03:17:55.491602   63944 logs.go:282] 0 containers: []
	W0211 03:17:55.491609   63944 logs.go:284] No container was found matching "kube-apiserver"
	I0211 03:17:55.491614   63944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:17:55.491662   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:17:55.535130   63944 cri.go:89] found id: ""
	I0211 03:17:55.535162   63944 logs.go:282] 0 containers: []
	W0211 03:17:55.535183   63944 logs.go:284] No container was found matching "etcd"
	I0211 03:17:55.535191   63944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:17:55.535252   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:17:55.578437   63944 cri.go:89] found id: ""
	I0211 03:17:55.578468   63944 logs.go:282] 0 containers: []
	W0211 03:17:55.578479   63944 logs.go:284] No container was found matching "coredns"
	I0211 03:17:55.578486   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:17:55.578554   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:17:55.621917   63944 cri.go:89] found id: ""
	I0211 03:17:55.621949   63944 logs.go:282] 0 containers: []
	W0211 03:17:55.621961   63944 logs.go:284] No container was found matching "kube-scheduler"
	I0211 03:17:55.621970   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:17:55.622027   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:17:55.660024   63944 cri.go:89] found id: ""
	I0211 03:17:55.660048   63944 logs.go:282] 0 containers: []
	W0211 03:17:55.660056   63944 logs.go:284] No container was found matching "kube-proxy"
	I0211 03:17:55.660061   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:17:55.660126   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:17:55.695277   63944 cri.go:89] found id: ""
	I0211 03:17:55.695311   63944 logs.go:282] 0 containers: []
	W0211 03:17:55.695322   63944 logs.go:284] No container was found matching "kube-controller-manager"
	I0211 03:17:55.695329   63944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:17:55.695378   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:17:55.730530   63944 cri.go:89] found id: ""
	I0211 03:17:55.730558   63944 logs.go:282] 0 containers: []
	W0211 03:17:55.730565   63944 logs.go:284] No container was found matching "kindnet"
	I0211 03:17:55.730571   63944 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0211 03:17:55.730618   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0211 03:17:55.770626   63944 cri.go:89] found id: ""
	I0211 03:17:55.770654   63944 logs.go:282] 0 containers: []
	W0211 03:17:55.770664   63944 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0211 03:17:55.770675   63944 logs.go:123] Gathering logs for kubelet ...
	I0211 03:17:55.770689   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:17:55.821304   63944 logs.go:123] Gathering logs for dmesg ...
	I0211 03:17:55.821339   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:17:55.834722   63944 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:17:55.834750   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:17:55.915498   63944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:17:55.915530   63944 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:17:55.915547   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:17:56.000246   63944 logs.go:123] Gathering logs for container status ...
	I0211 03:17:56.000289   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:17:58.541106   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:17:58.557917   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:17:58.558016   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:17:58.596318   63944 cri.go:89] found id: ""
	I0211 03:17:58.596355   63944 logs.go:282] 0 containers: []
	W0211 03:17:58.596367   63944 logs.go:284] No container was found matching "kube-apiserver"
	I0211 03:17:58.596376   63944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:17:58.596439   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:17:58.629513   63944 cri.go:89] found id: ""
	I0211 03:17:58.629552   63944 logs.go:282] 0 containers: []
	W0211 03:17:58.629562   63944 logs.go:284] No container was found matching "etcd"
	I0211 03:17:58.629568   63944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:17:58.629632   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:17:58.666206   63944 cri.go:89] found id: ""
	I0211 03:17:58.666241   63944 logs.go:282] 0 containers: []
	W0211 03:17:58.666252   63944 logs.go:284] No container was found matching "coredns"
	I0211 03:17:58.666259   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:17:58.666318   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:17:58.704443   63944 cri.go:89] found id: ""
	I0211 03:17:58.704472   63944 logs.go:282] 0 containers: []
	W0211 03:17:58.704479   63944 logs.go:284] No container was found matching "kube-scheduler"
	I0211 03:17:58.704485   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:17:58.704542   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:17:58.743134   63944 cri.go:89] found id: ""
	I0211 03:17:58.743169   63944 logs.go:282] 0 containers: []
	W0211 03:17:58.743180   63944 logs.go:284] No container was found matching "kube-proxy"
	I0211 03:17:58.743187   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:17:58.743258   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:17:58.778426   63944 cri.go:89] found id: ""
	I0211 03:17:58.778451   63944 logs.go:282] 0 containers: []
	W0211 03:17:58.778461   63944 logs.go:284] No container was found matching "kube-controller-manager"
	I0211 03:17:58.778469   63944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:17:58.778530   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:17:58.823509   63944 cri.go:89] found id: ""
	I0211 03:17:58.823552   63944 logs.go:282] 0 containers: []
	W0211 03:17:58.823565   63944 logs.go:284] No container was found matching "kindnet"
	I0211 03:17:58.823574   63944 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0211 03:17:58.823637   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0211 03:17:58.877837   63944 cri.go:89] found id: ""
	I0211 03:17:58.877875   63944 logs.go:282] 0 containers: []
	W0211 03:17:58.877888   63944 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0211 03:17:58.877901   63944 logs.go:123] Gathering logs for kubelet ...
	I0211 03:17:58.877915   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:17:58.942802   63944 logs.go:123] Gathering logs for dmesg ...
	I0211 03:17:58.942851   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:17:58.967362   63944 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:17:58.967393   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:17:59.033026   63944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:17:59.033052   63944 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:17:59.033066   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:17:59.106340   63944 logs.go:123] Gathering logs for container status ...
	I0211 03:17:59.106377   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:18:01.646147   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:18:01.659149   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:18:01.659245   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:18:01.696857   63944 cri.go:89] found id: ""
	I0211 03:18:01.696888   63944 logs.go:282] 0 containers: []
	W0211 03:18:01.696900   63944 logs.go:284] No container was found matching "kube-apiserver"
	I0211 03:18:01.696908   63944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:18:01.696971   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:18:01.733570   63944 cri.go:89] found id: ""
	I0211 03:18:01.733597   63944 logs.go:282] 0 containers: []
	W0211 03:18:01.733604   63944 logs.go:284] No container was found matching "etcd"
	I0211 03:18:01.733609   63944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:18:01.733657   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:18:01.769124   63944 cri.go:89] found id: ""
	I0211 03:18:01.769153   63944 logs.go:282] 0 containers: []
	W0211 03:18:01.769172   63944 logs.go:284] No container was found matching "coredns"
	I0211 03:18:01.769180   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:18:01.769236   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:18:01.806993   63944 cri.go:89] found id: ""
	I0211 03:18:01.807020   63944 logs.go:282] 0 containers: []
	W0211 03:18:01.807030   63944 logs.go:284] No container was found matching "kube-scheduler"
	I0211 03:18:01.807038   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:18:01.807094   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:18:01.840436   63944 cri.go:89] found id: ""
	I0211 03:18:01.840471   63944 logs.go:282] 0 containers: []
	W0211 03:18:01.840482   63944 logs.go:284] No container was found matching "kube-proxy"
	I0211 03:18:01.840489   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:18:01.840549   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:18:01.875241   63944 cri.go:89] found id: ""
	I0211 03:18:01.875279   63944 logs.go:282] 0 containers: []
	W0211 03:18:01.875291   63944 logs.go:284] No container was found matching "kube-controller-manager"
	I0211 03:18:01.875299   63944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:18:01.875355   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:18:01.909696   63944 cri.go:89] found id: ""
	I0211 03:18:01.909725   63944 logs.go:282] 0 containers: []
	W0211 03:18:01.909737   63944 logs.go:284] No container was found matching "kindnet"
	I0211 03:18:01.909744   63944 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0211 03:18:01.909809   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0211 03:18:01.943677   63944 cri.go:89] found id: ""
	I0211 03:18:01.943709   63944 logs.go:282] 0 containers: []
	W0211 03:18:01.943720   63944 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0211 03:18:01.943732   63944 logs.go:123] Gathering logs for container status ...
	I0211 03:18:01.943746   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:18:01.986536   63944 logs.go:123] Gathering logs for kubelet ...
	I0211 03:18:01.986570   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:18:02.056790   63944 logs.go:123] Gathering logs for dmesg ...
	I0211 03:18:02.056822   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:18:02.073425   63944 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:18:02.073458   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:18:02.152182   63944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:18:02.152208   63944 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:18:02.152224   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:18:04.732514   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:18:04.751902   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:18:04.751969   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:18:04.799121   63944 cri.go:89] found id: ""
	I0211 03:18:04.799145   63944 logs.go:282] 0 containers: []
	W0211 03:18:04.799153   63944 logs.go:284] No container was found matching "kube-apiserver"
	I0211 03:18:04.799159   63944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:18:04.799211   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:18:04.835901   63944 cri.go:89] found id: ""
	I0211 03:18:04.835931   63944 logs.go:282] 0 containers: []
	W0211 03:18:04.835940   63944 logs.go:284] No container was found matching "etcd"
	I0211 03:18:04.835948   63944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:18:04.836004   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:18:04.872614   63944 cri.go:89] found id: ""
	I0211 03:18:04.872641   63944 logs.go:282] 0 containers: []
	W0211 03:18:04.872651   63944 logs.go:284] No container was found matching "coredns"
	I0211 03:18:04.872658   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:18:04.872720   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:18:04.920162   63944 cri.go:89] found id: ""
	I0211 03:18:04.920194   63944 logs.go:282] 0 containers: []
	W0211 03:18:04.920202   63944 logs.go:284] No container was found matching "kube-scheduler"
	I0211 03:18:04.920207   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:18:04.920253   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:18:04.962117   63944 cri.go:89] found id: ""
	I0211 03:18:04.962151   63944 logs.go:282] 0 containers: []
	W0211 03:18:04.962163   63944 logs.go:284] No container was found matching "kube-proxy"
	I0211 03:18:04.962169   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:18:04.962234   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:18:05.007353   63944 cri.go:89] found id: ""
	I0211 03:18:05.007388   63944 logs.go:282] 0 containers: []
	W0211 03:18:05.007400   63944 logs.go:284] No container was found matching "kube-controller-manager"
	I0211 03:18:05.007409   63944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:18:05.007478   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:18:05.046181   63944 cri.go:89] found id: ""
	I0211 03:18:05.046210   63944 logs.go:282] 0 containers: []
	W0211 03:18:05.046221   63944 logs.go:284] No container was found matching "kindnet"
	I0211 03:18:05.046233   63944 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0211 03:18:05.046290   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0211 03:18:05.092552   63944 cri.go:89] found id: ""
	I0211 03:18:05.092582   63944 logs.go:282] 0 containers: []
	W0211 03:18:05.092593   63944 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0211 03:18:05.092604   63944 logs.go:123] Gathering logs for dmesg ...
	I0211 03:18:05.092617   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:18:05.106739   63944 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:18:05.106778   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:18:05.180240   63944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:18:05.180266   63944 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:18:05.180278   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:18:05.278736   63944 logs.go:123] Gathering logs for container status ...
	I0211 03:18:05.278784   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:18:05.332038   63944 logs.go:123] Gathering logs for kubelet ...
	I0211 03:18:05.332063   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:18:07.894334   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:18:07.909486   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:18:07.909565   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:18:07.953122   63944 cri.go:89] found id: ""
	I0211 03:18:07.953152   63944 logs.go:282] 0 containers: []
	W0211 03:18:07.953163   63944 logs.go:284] No container was found matching "kube-apiserver"
	I0211 03:18:07.953178   63944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:18:07.953229   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:18:07.993728   63944 cri.go:89] found id: ""
	I0211 03:18:07.993755   63944 logs.go:282] 0 containers: []
	W0211 03:18:07.993769   63944 logs.go:284] No container was found matching "etcd"
	I0211 03:18:07.993777   63944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:18:07.993837   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:18:08.029630   63944 cri.go:89] found id: ""
	I0211 03:18:08.029656   63944 logs.go:282] 0 containers: []
	W0211 03:18:08.029667   63944 logs.go:284] No container was found matching "coredns"
	I0211 03:18:08.029674   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:18:08.029732   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:18:08.067884   63944 cri.go:89] found id: ""
	I0211 03:18:08.067907   63944 logs.go:282] 0 containers: []
	W0211 03:18:08.067916   63944 logs.go:284] No container was found matching "kube-scheduler"
	I0211 03:18:08.067923   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:18:08.067975   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:18:08.108907   63944 cri.go:89] found id: ""
	I0211 03:18:08.108929   63944 logs.go:282] 0 containers: []
	W0211 03:18:08.108938   63944 logs.go:284] No container was found matching "kube-proxy"
	I0211 03:18:08.108945   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:18:08.108994   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:18:08.144046   63944 cri.go:89] found id: ""
	I0211 03:18:08.144070   63944 logs.go:282] 0 containers: []
	W0211 03:18:08.144081   63944 logs.go:284] No container was found matching "kube-controller-manager"
	I0211 03:18:08.144091   63944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:18:08.144162   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:18:08.178504   63944 cri.go:89] found id: ""
	I0211 03:18:08.178532   63944 logs.go:282] 0 containers: []
	W0211 03:18:08.178540   63944 logs.go:284] No container was found matching "kindnet"
	I0211 03:18:08.178547   63944 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0211 03:18:08.178595   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0211 03:18:08.210139   63944 cri.go:89] found id: ""
	I0211 03:18:08.210178   63944 logs.go:282] 0 containers: []
	W0211 03:18:08.210216   63944 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0211 03:18:08.210229   63944 logs.go:123] Gathering logs for kubelet ...
	I0211 03:18:08.210248   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:18:08.268616   63944 logs.go:123] Gathering logs for dmesg ...
	I0211 03:18:08.268658   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:18:08.283384   63944 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:18:08.283414   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:18:08.361556   63944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:18:08.361581   63944 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:18:08.361594   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:18:08.440016   63944 logs.go:123] Gathering logs for container status ...
	I0211 03:18:08.440051   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:18:10.978746   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:18:10.993238   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:18:10.993303   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:18:11.045963   63944 cri.go:89] found id: ""
	I0211 03:18:11.045991   63944 logs.go:282] 0 containers: []
	W0211 03:18:11.046002   63944 logs.go:284] No container was found matching "kube-apiserver"
	I0211 03:18:11.046009   63944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:18:11.046064   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:18:11.080039   63944 cri.go:89] found id: ""
	I0211 03:18:11.080066   63944 logs.go:282] 0 containers: []
	W0211 03:18:11.080077   63944 logs.go:284] No container was found matching "etcd"
	I0211 03:18:11.080085   63944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:18:11.080140   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:18:11.114992   63944 cri.go:89] found id: ""
	I0211 03:18:11.115020   63944 logs.go:282] 0 containers: []
	W0211 03:18:11.115029   63944 logs.go:284] No container was found matching "coredns"
	I0211 03:18:11.115037   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:18:11.115094   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:18:11.146498   63944 cri.go:89] found id: ""
	I0211 03:18:11.146532   63944 logs.go:282] 0 containers: []
	W0211 03:18:11.146542   63944 logs.go:284] No container was found matching "kube-scheduler"
	I0211 03:18:11.146549   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:18:11.146612   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:18:11.178334   63944 cri.go:89] found id: ""
	I0211 03:18:11.178359   63944 logs.go:282] 0 containers: []
	W0211 03:18:11.178367   63944 logs.go:284] No container was found matching "kube-proxy"
	I0211 03:18:11.178375   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:18:11.178441   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:18:11.210710   63944 cri.go:89] found id: ""
	I0211 03:18:11.210739   63944 logs.go:282] 0 containers: []
	W0211 03:18:11.210750   63944 logs.go:284] No container was found matching "kube-controller-manager"
	I0211 03:18:11.210758   63944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:18:11.210818   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:18:11.244751   63944 cri.go:89] found id: ""
	I0211 03:18:11.244777   63944 logs.go:282] 0 containers: []
	W0211 03:18:11.244789   63944 logs.go:284] No container was found matching "kindnet"
	I0211 03:18:11.244796   63944 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0211 03:18:11.244850   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0211 03:18:11.283109   63944 cri.go:89] found id: ""
	I0211 03:18:11.283139   63944 logs.go:282] 0 containers: []
	W0211 03:18:11.283149   63944 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0211 03:18:11.283160   63944 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:18:11.283175   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:18:11.360273   63944 logs.go:123] Gathering logs for container status ...
	I0211 03:18:11.360310   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:18:11.402343   63944 logs.go:123] Gathering logs for kubelet ...
	I0211 03:18:11.402381   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:18:11.452940   63944 logs.go:123] Gathering logs for dmesg ...
	I0211 03:18:11.452969   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:18:11.465320   63944 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:18:11.465347   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:18:11.535608   63944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:18:14.035964   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:18:14.049064   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:18:14.049133   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:18:14.087496   63944 cri.go:89] found id: ""
	I0211 03:18:14.087530   63944 logs.go:282] 0 containers: []
	W0211 03:18:14.087544   63944 logs.go:284] No container was found matching "kube-apiserver"
	I0211 03:18:14.087551   63944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:18:14.087614   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:18:14.119567   63944 cri.go:89] found id: ""
	I0211 03:18:14.119599   63944 logs.go:282] 0 containers: []
	W0211 03:18:14.119612   63944 logs.go:284] No container was found matching "etcd"
	I0211 03:18:14.119619   63944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:18:14.119677   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:18:14.156713   63944 cri.go:89] found id: ""
	I0211 03:18:14.156737   63944 logs.go:282] 0 containers: []
	W0211 03:18:14.156745   63944 logs.go:284] No container was found matching "coredns"
	I0211 03:18:14.156751   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:18:14.156805   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:18:14.196344   63944 cri.go:89] found id: ""
	I0211 03:18:14.196375   63944 logs.go:282] 0 containers: []
	W0211 03:18:14.196386   63944 logs.go:284] No container was found matching "kube-scheduler"
	I0211 03:18:14.196395   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:18:14.196454   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:18:14.229389   63944 cri.go:89] found id: ""
	I0211 03:18:14.229419   63944 logs.go:282] 0 containers: []
	W0211 03:18:14.229431   63944 logs.go:284] No container was found matching "kube-proxy"
	I0211 03:18:14.229438   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:18:14.229485   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:18:14.261540   63944 cri.go:89] found id: ""
	I0211 03:18:14.261571   63944 logs.go:282] 0 containers: []
	W0211 03:18:14.261582   63944 logs.go:284] No container was found matching "kube-controller-manager"
	I0211 03:18:14.261590   63944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:18:14.261656   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:18:14.296446   63944 cri.go:89] found id: ""
	I0211 03:18:14.296475   63944 logs.go:282] 0 containers: []
	W0211 03:18:14.296486   63944 logs.go:284] No container was found matching "kindnet"
	I0211 03:18:14.296494   63944 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0211 03:18:14.296540   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0211 03:18:14.332359   63944 cri.go:89] found id: ""
	I0211 03:18:14.332390   63944 logs.go:282] 0 containers: []
	W0211 03:18:14.332402   63944 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0211 03:18:14.332411   63944 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:18:14.332425   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:18:14.414354   63944 logs.go:123] Gathering logs for container status ...
	I0211 03:18:14.414388   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:18:14.464280   63944 logs.go:123] Gathering logs for kubelet ...
	I0211 03:18:14.464317   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:18:14.514434   63944 logs.go:123] Gathering logs for dmesg ...
	I0211 03:18:14.514466   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:18:14.528221   63944 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:18:14.528254   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:18:14.602345   63944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:18:17.103032   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:18:17.117076   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:18:17.117156   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:18:17.156176   63944 cri.go:89] found id: ""
	I0211 03:18:17.156213   63944 logs.go:282] 0 containers: []
	W0211 03:18:17.156225   63944 logs.go:284] No container was found matching "kube-apiserver"
	I0211 03:18:17.156236   63944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:18:17.156293   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:18:17.193246   63944 cri.go:89] found id: ""
	I0211 03:18:17.193276   63944 logs.go:282] 0 containers: []
	W0211 03:18:17.193285   63944 logs.go:284] No container was found matching "etcd"
	I0211 03:18:17.193291   63944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:18:17.193339   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:18:17.228444   63944 cri.go:89] found id: ""
	I0211 03:18:17.228480   63944 logs.go:282] 0 containers: []
	W0211 03:18:17.228493   63944 logs.go:284] No container was found matching "coredns"
	I0211 03:18:17.228504   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:18:17.228570   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:18:17.261323   63944 cri.go:89] found id: ""
	I0211 03:18:17.261361   63944 logs.go:282] 0 containers: []
	W0211 03:18:17.261373   63944 logs.go:284] No container was found matching "kube-scheduler"
	I0211 03:18:17.261382   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:18:17.261457   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:18:17.295268   63944 cri.go:89] found id: ""
	I0211 03:18:17.295299   63944 logs.go:282] 0 containers: []
	W0211 03:18:17.295311   63944 logs.go:284] No container was found matching "kube-proxy"
	I0211 03:18:17.295319   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:18:17.295395   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:18:17.330259   63944 cri.go:89] found id: ""
	I0211 03:18:17.330289   63944 logs.go:282] 0 containers: []
	W0211 03:18:17.330301   63944 logs.go:284] No container was found matching "kube-controller-manager"
	I0211 03:18:17.330308   63944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:18:17.330376   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:18:17.362021   63944 cri.go:89] found id: ""
	I0211 03:18:17.362055   63944 logs.go:282] 0 containers: []
	W0211 03:18:17.362067   63944 logs.go:284] No container was found matching "kindnet"
	I0211 03:18:17.362075   63944 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0211 03:18:17.362158   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0211 03:18:17.393687   63944 cri.go:89] found id: ""
	I0211 03:18:17.393721   63944 logs.go:282] 0 containers: []
	W0211 03:18:17.393731   63944 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0211 03:18:17.393742   63944 logs.go:123] Gathering logs for kubelet ...
	I0211 03:18:17.393755   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:18:17.441983   63944 logs.go:123] Gathering logs for dmesg ...
	I0211 03:18:17.442012   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:18:17.454845   63944 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:18:17.454890   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:18:17.522284   63944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:18:17.522310   63944 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:18:17.522325   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:18:17.607670   63944 logs.go:123] Gathering logs for container status ...
	I0211 03:18:17.607706   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:18:20.146661   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:18:20.159254   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:18:20.159323   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:18:20.191061   63944 cri.go:89] found id: ""
	I0211 03:18:20.191095   63944 logs.go:282] 0 containers: []
	W0211 03:18:20.191108   63944 logs.go:284] No container was found matching "kube-apiserver"
	I0211 03:18:20.191117   63944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:18:20.191184   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:18:20.223156   63944 cri.go:89] found id: ""
	I0211 03:18:20.223190   63944 logs.go:282] 0 containers: []
	W0211 03:18:20.223203   63944 logs.go:284] No container was found matching "etcd"
	I0211 03:18:20.223212   63944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:18:20.223273   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:18:20.256152   63944 cri.go:89] found id: ""
	I0211 03:18:20.256186   63944 logs.go:282] 0 containers: []
	W0211 03:18:20.256200   63944 logs.go:284] No container was found matching "coredns"
	I0211 03:18:20.256210   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:18:20.256274   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:18:20.288561   63944 cri.go:89] found id: ""
	I0211 03:18:20.288594   63944 logs.go:282] 0 containers: []
	W0211 03:18:20.288607   63944 logs.go:284] No container was found matching "kube-scheduler"
	I0211 03:18:20.288614   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:18:20.288667   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:18:20.326408   63944 cri.go:89] found id: ""
	I0211 03:18:20.326438   63944 logs.go:282] 0 containers: []
	W0211 03:18:20.326450   63944 logs.go:284] No container was found matching "kube-proxy"
	I0211 03:18:20.326457   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:18:20.326521   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:18:20.360176   63944 cri.go:89] found id: ""
	I0211 03:18:20.360206   63944 logs.go:282] 0 containers: []
	W0211 03:18:20.360217   63944 logs.go:284] No container was found matching "kube-controller-manager"
	I0211 03:18:20.360224   63944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:18:20.360289   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:18:20.392301   63944 cri.go:89] found id: ""
	I0211 03:18:20.392326   63944 logs.go:282] 0 containers: []
	W0211 03:18:20.392337   63944 logs.go:284] No container was found matching "kindnet"
	I0211 03:18:20.392345   63944 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0211 03:18:20.392421   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0211 03:18:20.427542   63944 cri.go:89] found id: ""
	I0211 03:18:20.427568   63944 logs.go:282] 0 containers: []
	W0211 03:18:20.427579   63944 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0211 03:18:20.427590   63944 logs.go:123] Gathering logs for kubelet ...
	I0211 03:18:20.427604   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:18:20.483916   63944 logs.go:123] Gathering logs for dmesg ...
	I0211 03:18:20.483944   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:18:20.497746   63944 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:18:20.497776   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:18:20.562972   63944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:18:20.562994   63944 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:18:20.563009   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:18:20.648913   63944 logs.go:123] Gathering logs for container status ...
	I0211 03:18:20.648956   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:18:23.188678   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:18:23.202527   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:18:23.202591   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:18:23.240221   63944 cri.go:89] found id: ""
	I0211 03:18:23.240247   63944 logs.go:282] 0 containers: []
	W0211 03:18:23.240256   63944 logs.go:284] No container was found matching "kube-apiserver"
	I0211 03:18:23.240261   63944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:18:23.240317   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:18:23.276645   63944 cri.go:89] found id: ""
	I0211 03:18:23.276693   63944 logs.go:282] 0 containers: []
	W0211 03:18:23.276706   63944 logs.go:284] No container was found matching "etcd"
	I0211 03:18:23.276714   63944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:18:23.276772   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:18:23.313875   63944 cri.go:89] found id: ""
	I0211 03:18:23.313903   63944 logs.go:282] 0 containers: []
	W0211 03:18:23.313911   63944 logs.go:284] No container was found matching "coredns"
	I0211 03:18:23.313917   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:18:23.314008   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:18:23.354643   63944 cri.go:89] found id: ""
	I0211 03:18:23.354685   63944 logs.go:282] 0 containers: []
	W0211 03:18:23.354697   63944 logs.go:284] No container was found matching "kube-scheduler"
	I0211 03:18:23.354709   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:18:23.354775   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:18:23.391540   63944 cri.go:89] found id: ""
	I0211 03:18:23.391574   63944 logs.go:282] 0 containers: []
	W0211 03:18:23.391585   63944 logs.go:284] No container was found matching "kube-proxy"
	I0211 03:18:23.391592   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:18:23.391656   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:18:23.427611   63944 cri.go:89] found id: ""
	I0211 03:18:23.427641   63944 logs.go:282] 0 containers: []
	W0211 03:18:23.427653   63944 logs.go:284] No container was found matching "kube-controller-manager"
	I0211 03:18:23.427660   63944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:18:23.427716   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:18:23.474124   63944 cri.go:89] found id: ""
	I0211 03:18:23.474151   63944 logs.go:282] 0 containers: []
	W0211 03:18:23.474161   63944 logs.go:284] No container was found matching "kindnet"
	I0211 03:18:23.474169   63944 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0211 03:18:23.474227   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0211 03:18:23.518784   63944 cri.go:89] found id: ""
	I0211 03:18:23.518862   63944 logs.go:282] 0 containers: []
	W0211 03:18:23.518902   63944 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0211 03:18:23.518921   63944 logs.go:123] Gathering logs for kubelet ...
	I0211 03:18:23.518937   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:18:23.575587   63944 logs.go:123] Gathering logs for dmesg ...
	I0211 03:18:23.575618   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:18:23.590193   63944 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:18:23.590218   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:18:23.667897   63944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:18:23.667928   63944 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:18:23.667944   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:18:23.745593   63944 logs.go:123] Gathering logs for container status ...
	I0211 03:18:23.745629   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:18:26.286297   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:18:26.298719   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:18:26.298775   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:18:26.333064   63944 cri.go:89] found id: ""
	I0211 03:18:26.333094   63944 logs.go:282] 0 containers: []
	W0211 03:18:26.333114   63944 logs.go:284] No container was found matching "kube-apiserver"
	I0211 03:18:26.333121   63944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:18:26.333185   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:18:26.369581   63944 cri.go:89] found id: ""
	I0211 03:18:26.369614   63944 logs.go:282] 0 containers: []
	W0211 03:18:26.369628   63944 logs.go:284] No container was found matching "etcd"
	I0211 03:18:26.369637   63944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:18:26.369704   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:18:26.401884   63944 cri.go:89] found id: ""
	I0211 03:18:26.401917   63944 logs.go:282] 0 containers: []
	W0211 03:18:26.401928   63944 logs.go:284] No container was found matching "coredns"
	I0211 03:18:26.401936   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:18:26.402006   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:18:26.438382   63944 cri.go:89] found id: ""
	I0211 03:18:26.438413   63944 logs.go:282] 0 containers: []
	W0211 03:18:26.438425   63944 logs.go:284] No container was found matching "kube-scheduler"
	I0211 03:18:26.438432   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:18:26.438513   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:18:26.486208   63944 cri.go:89] found id: ""
	I0211 03:18:26.486239   63944 logs.go:282] 0 containers: []
	W0211 03:18:26.486251   63944 logs.go:284] No container was found matching "kube-proxy"
	I0211 03:18:26.486258   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:18:26.486321   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:18:26.518523   63944 cri.go:89] found id: ""
	I0211 03:18:26.518552   63944 logs.go:282] 0 containers: []
	W0211 03:18:26.518563   63944 logs.go:284] No container was found matching "kube-controller-manager"
	I0211 03:18:26.518570   63944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:18:26.518625   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:18:26.553243   63944 cri.go:89] found id: ""
	I0211 03:18:26.553276   63944 logs.go:282] 0 containers: []
	W0211 03:18:26.553288   63944 logs.go:284] No container was found matching "kindnet"
	I0211 03:18:26.553305   63944 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0211 03:18:26.553382   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0211 03:18:26.591761   63944 cri.go:89] found id: ""
	I0211 03:18:26.591795   63944 logs.go:282] 0 containers: []
	W0211 03:18:26.591806   63944 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0211 03:18:26.591817   63944 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:18:26.591830   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:18:26.669133   63944 logs.go:123] Gathering logs for container status ...
	I0211 03:18:26.669165   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:18:26.708190   63944 logs.go:123] Gathering logs for kubelet ...
	I0211 03:18:26.708223   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:18:26.756273   63944 logs.go:123] Gathering logs for dmesg ...
	I0211 03:18:26.756303   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:18:26.768688   63944 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:18:26.768710   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:18:26.839465   63944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:18:29.341133   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:18:29.353946   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:18:29.354004   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:18:29.384750   63944 cri.go:89] found id: ""
	I0211 03:18:29.384782   63944 logs.go:282] 0 containers: []
	W0211 03:18:29.384794   63944 logs.go:284] No container was found matching "kube-apiserver"
	I0211 03:18:29.384803   63944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:18:29.384932   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:18:29.417349   63944 cri.go:89] found id: ""
	I0211 03:18:29.417377   63944 logs.go:282] 0 containers: []
	W0211 03:18:29.417387   63944 logs.go:284] No container was found matching "etcd"
	I0211 03:18:29.417394   63944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:18:29.417464   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:18:29.455151   63944 cri.go:89] found id: ""
	I0211 03:18:29.455181   63944 logs.go:282] 0 containers: []
	W0211 03:18:29.455192   63944 logs.go:284] No container was found matching "coredns"
	I0211 03:18:29.455199   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:18:29.455263   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:18:29.488837   63944 cri.go:89] found id: ""
	I0211 03:18:29.488863   63944 logs.go:282] 0 containers: []
	W0211 03:18:29.488871   63944 logs.go:284] No container was found matching "kube-scheduler"
	I0211 03:18:29.488877   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:18:29.488931   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:18:29.521507   63944 cri.go:89] found id: ""
	I0211 03:18:29.521540   63944 logs.go:282] 0 containers: []
	W0211 03:18:29.521551   63944 logs.go:284] No container was found matching "kube-proxy"
	I0211 03:18:29.521558   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:18:29.521617   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:18:29.553493   63944 cri.go:89] found id: ""
	I0211 03:18:29.553520   63944 logs.go:282] 0 containers: []
	W0211 03:18:29.553530   63944 logs.go:284] No container was found matching "kube-controller-manager"
	I0211 03:18:29.553537   63944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:18:29.553582   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:18:29.588063   63944 cri.go:89] found id: ""
	I0211 03:18:29.588085   63944 logs.go:282] 0 containers: []
	W0211 03:18:29.588092   63944 logs.go:284] No container was found matching "kindnet"
	I0211 03:18:29.588098   63944 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0211 03:18:29.588146   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0211 03:18:29.621020   63944 cri.go:89] found id: ""
	I0211 03:18:29.621044   63944 logs.go:282] 0 containers: []
	W0211 03:18:29.621056   63944 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0211 03:18:29.621065   63944 logs.go:123] Gathering logs for container status ...
	I0211 03:18:29.621078   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:18:29.656734   63944 logs.go:123] Gathering logs for kubelet ...
	I0211 03:18:29.656778   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0211 03:18:29.711138   63944 logs.go:123] Gathering logs for dmesg ...
	I0211 03:18:29.711170   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:18:29.724644   63944 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:18:29.724666   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:18:29.803916   63944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:18:29.803940   63944 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:18:29.803956   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:18:32.379227   63944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:18:32.391647   63944 kubeadm.go:597] duration metric: took 4m2.940086345s to restartPrimaryControlPlane
	W0211 03:18:32.391725   63944 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0211 03:18:32.391756   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0211 03:18:32.957776   63944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0211 03:18:32.971506   63944 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0211 03:18:32.982631   63944 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0211 03:18:32.991916   63944 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0211 03:18:32.991939   63944 kubeadm.go:157] found existing configuration files:
	
	I0211 03:18:32.991991   63944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0211 03:18:33.000682   63944 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0211 03:18:33.000732   63944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0211 03:18:33.011154   63944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0211 03:18:33.019767   63944 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0211 03:18:33.019831   63944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0211 03:18:33.029942   63944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0211 03:18:33.039510   63944 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0211 03:18:33.039559   63944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0211 03:18:33.049419   63944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0211 03:18:33.059099   63944 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0211 03:18:33.059170   63944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0211 03:18:33.068279   63944 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0211 03:18:33.136218   63944 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0211 03:18:33.136329   63944 kubeadm.go:310] [preflight] Running pre-flight checks
	I0211 03:18:33.271181   63944 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0211 03:18:33.271341   63944 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0211 03:18:33.271486   63944 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0211 03:18:33.443986   63944 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0211 03:18:33.446498   63944 out.go:235]   - Generating certificates and keys ...
	I0211 03:18:33.446591   63944 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0211 03:18:33.446656   63944 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0211 03:18:33.446747   63944 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0211 03:18:33.446832   63944 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0211 03:18:33.446930   63944 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0211 03:18:33.447008   63944 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0211 03:18:33.447089   63944 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0211 03:18:33.447167   63944 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0211 03:18:33.447256   63944 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0211 03:18:33.447347   63944 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0211 03:18:33.447410   63944 kubeadm.go:310] [certs] Using the existing "sa" key
	I0211 03:18:33.447496   63944 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0211 03:18:33.612429   63944 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0211 03:18:33.779070   63944 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0211 03:18:33.840160   63944 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0211 03:18:33.963801   63944 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0211 03:18:33.978115   63944 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0211 03:18:33.979201   63944 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0211 03:18:33.979273   63944 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0211 03:18:34.111158   63944 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0211 03:18:34.112791   63944 out.go:235]   - Booting up control plane ...
	I0211 03:18:34.112924   63944 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0211 03:18:34.121285   63944 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0211 03:18:34.123166   63944 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0211 03:18:34.124194   63944 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0211 03:18:34.126453   63944 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0211 03:19:14.127800   63944 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0211 03:19:14.128418   63944 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0211 03:19:14.128612   63944 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0211 03:19:19.128969   63944 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0211 03:19:19.129150   63944 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0211 03:19:29.129586   63944 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0211 03:19:29.129812   63944 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0211 03:19:49.130801   63944 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0211 03:19:49.131133   63944 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0211 03:20:29.132728   63944 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0211 03:20:29.133055   63944 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0211 03:20:29.133082   63944 kubeadm.go:310] 
	I0211 03:20:29.133151   63944 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0211 03:20:29.133224   63944 kubeadm.go:310] 		timed out waiting for the condition
	I0211 03:20:29.133235   63944 kubeadm.go:310] 
	I0211 03:20:29.133289   63944 kubeadm.go:310] 	This error is likely caused by:
	I0211 03:20:29.133373   63944 kubeadm.go:310] 		- The kubelet is not running
	I0211 03:20:29.133536   63944 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0211 03:20:29.133555   63944 kubeadm.go:310] 
	I0211 03:20:29.133705   63944 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0211 03:20:29.133755   63944 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0211 03:20:29.133802   63944 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0211 03:20:29.133814   63944 kubeadm.go:310] 
	I0211 03:20:29.133989   63944 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0211 03:20:29.134117   63944 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0211 03:20:29.134128   63944 kubeadm.go:310] 
	I0211 03:20:29.134295   63944 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0211 03:20:29.134416   63944 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0211 03:20:29.134536   63944 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0211 03:20:29.134675   63944 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0211 03:20:29.134724   63944 kubeadm.go:310] 
	I0211 03:20:29.134894   63944 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0211 03:20:29.135025   63944 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0211 03:20:29.135201   63944 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0211 03:20:29.135286   63944 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0211 03:20:29.135330   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0211 03:20:29.612141   63944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0211 03:20:29.626901   63944 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0211 03:20:29.636253   63944 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0211 03:20:29.636270   63944 kubeadm.go:157] found existing configuration files:
	
	I0211 03:20:29.636312   63944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0211 03:20:29.645179   63944 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0211 03:20:29.645224   63944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0211 03:20:29.654432   63944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0211 03:20:29.663413   63944 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0211 03:20:29.663457   63944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0211 03:20:29.673427   63944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0211 03:20:29.682055   63944 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0211 03:20:29.682133   63944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0211 03:20:29.691302   63944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0211 03:20:29.700215   63944 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0211 03:20:29.700282   63944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0211 03:20:29.709542   63944 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0211 03:20:29.786063   63944 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0211 03:20:29.786128   63944 kubeadm.go:310] [preflight] Running pre-flight checks
	I0211 03:20:29.942704   63944 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0211 03:20:29.942897   63944 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0211 03:20:29.943054   63944 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0211 03:20:30.148278   63944 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0211 03:20:30.150268   63944 out.go:235]   - Generating certificates and keys ...
	I0211 03:20:30.150374   63944 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0211 03:20:30.150472   63944 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0211 03:20:30.150575   63944 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0211 03:20:30.150657   63944 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0211 03:20:30.150753   63944 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0211 03:20:30.150829   63944 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0211 03:20:30.151052   63944 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0211 03:20:30.151239   63944 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0211 03:20:30.151868   63944 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0211 03:20:30.152314   63944 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0211 03:20:30.152549   63944 kubeadm.go:310] [certs] Using the existing "sa" key
	I0211 03:20:30.152657   63944 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0211 03:20:30.431833   63944 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0211 03:20:30.564208   63944 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0211 03:20:30.637768   63944 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0211 03:20:30.932727   63944 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0211 03:20:30.960481   63944 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0211 03:20:30.962063   63944 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0211 03:20:30.962232   63944 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0211 03:20:31.152194   63944 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0211 03:20:31.153827   63944 out.go:235]   - Booting up control plane ...
	I0211 03:20:31.153956   63944 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0211 03:20:31.171000   63944 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0211 03:20:31.172447   63944 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0211 03:20:31.173527   63944 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0211 03:20:31.176764   63944 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0211 03:21:11.178772   63944 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0211 03:21:11.179269   63944 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0211 03:21:11.179468   63944 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0211 03:21:16.179745   63944 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0211 03:21:16.179995   63944 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0211 03:21:26.180568   63944 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0211 03:21:26.180873   63944 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0211 03:21:46.181403   63944 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0211 03:21:46.181691   63944 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0211 03:22:26.180696   63944 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0211 03:22:26.180969   63944 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0211 03:22:26.180985   63944 kubeadm.go:310] 
	I0211 03:22:26.181048   63944 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0211 03:22:26.181461   63944 kubeadm.go:310] 		timed out waiting for the condition
	I0211 03:22:26.181482   63944 kubeadm.go:310] 
	I0211 03:22:26.181527   63944 kubeadm.go:310] 	This error is likely caused by:
	I0211 03:22:26.181575   63944 kubeadm.go:310] 		- The kubelet is not running
	I0211 03:22:26.181710   63944 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0211 03:22:26.181722   63944 kubeadm.go:310] 
	I0211 03:22:26.181859   63944 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0211 03:22:26.181905   63944 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0211 03:22:26.181946   63944 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0211 03:22:26.181953   63944 kubeadm.go:310] 
	I0211 03:22:26.182044   63944 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0211 03:22:26.182117   63944 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0211 03:22:26.182125   63944 kubeadm.go:310] 
	I0211 03:22:26.182243   63944 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0211 03:22:26.182320   63944 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0211 03:22:26.182404   63944 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0211 03:22:26.182485   63944 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0211 03:22:26.182493   63944 kubeadm.go:310] 
	I0211 03:22:26.184968   63944 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0211 03:22:26.185102   63944 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0211 03:22:26.185192   63944 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0211 03:22:26.185257   63944 kubeadm.go:394] duration metric: took 7m56.786325849s to StartCluster
	I0211 03:22:26.185298   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:22:26.185359   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:22:26.243779   63944 cri.go:89] found id: ""
	I0211 03:22:26.243817   63944 logs.go:282] 0 containers: []
	W0211 03:22:26.243829   63944 logs.go:284] No container was found matching "kube-apiserver"
	I0211 03:22:26.243837   63944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:22:26.243898   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:22:26.293825   63944 cri.go:89] found id: ""
	I0211 03:22:26.293856   63944 logs.go:282] 0 containers: []
	W0211 03:22:26.293867   63944 logs.go:284] No container was found matching "etcd"
	I0211 03:22:26.293883   63944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:22:26.293946   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:22:26.347783   63944 cri.go:89] found id: ""
	I0211 03:22:26.347817   63944 logs.go:282] 0 containers: []
	W0211 03:22:26.347827   63944 logs.go:284] No container was found matching "coredns"
	I0211 03:22:26.347835   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:22:26.347902   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:22:26.408435   63944 cri.go:89] found id: ""
	I0211 03:22:26.408464   63944 logs.go:282] 0 containers: []
	W0211 03:22:26.408474   63944 logs.go:284] No container was found matching "kube-scheduler"
	I0211 03:22:26.408482   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:22:26.408538   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:22:26.460484   63944 cri.go:89] found id: ""
	I0211 03:22:26.460510   63944 logs.go:282] 0 containers: []
	W0211 03:22:26.460519   63944 logs.go:284] No container was found matching "kube-proxy"
	I0211 03:22:26.460526   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:22:26.460585   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:22:26.518551   63944 cri.go:89] found id: ""
	I0211 03:22:26.518576   63944 logs.go:282] 0 containers: []
	W0211 03:22:26.518586   63944 logs.go:284] No container was found matching "kube-controller-manager"
	I0211 03:22:26.518594   63944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:22:26.518652   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:22:26.581984   63944 cri.go:89] found id: ""
	I0211 03:22:26.582024   63944 logs.go:282] 0 containers: []
	W0211 03:22:26.582035   63944 logs.go:284] No container was found matching "kindnet"
	I0211 03:22:26.582043   63944 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0211 03:22:26.582105   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0211 03:22:26.641024   63944 cri.go:89] found id: ""
	I0211 03:22:26.641051   63944 logs.go:282] 0 containers: []
	W0211 03:22:26.641061   63944 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0211 03:22:26.641073   63944 logs.go:123] Gathering logs for dmesg ...
	I0211 03:22:26.641091   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:22:26.667223   63944 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:22:26.667258   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:22:26.781585   63944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:22:26.781614   63944 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:22:26.781630   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:22:26.951278   63944 logs.go:123] Gathering logs for container status ...
	I0211 03:22:26.951376   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:22:27.005666   63944 logs.go:123] Gathering logs for kubelet ...
	I0211 03:22:27.005692   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0211 03:22:27.067504   63944 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0211 03:22:27.067561   63944 out.go:270] * 
	* 
	W0211 03:22:27.067626   63944 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0211 03:22:27.067652   63944 out.go:270] * 
	* 
	W0211 03:22:27.068876   63944 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0211 03:22:27.071907   63944 out.go:201] 
	W0211 03:22:27.073466   63944 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0211 03:22:27.073520   63944 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0211 03:22:27.073549   63944 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0211 03:22:27.074865   63944 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-244815 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
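The kubeadm output above points at the kubelet as the first thing to inspect. A minimal manual triage on the failing guest, assuming interactive access via `minikube ssh` to the old-k8s-version-244815 profile (a hypothetical session, not part of the recorded test run), simply follows the hints printed in the log:

	# open a shell on the guest of the failing profile
	out/minikube-linux-amd64 -p old-k8s-version-244815 ssh
	# inside the guest: check whether the kubelet service is running and why it may have exited
	sudo systemctl status kubelet --no-pager
	sudo journalctl -xeu kubelet --no-pager | tail -n 100
	# list any control-plane containers CRI-O started, as the kubeadm message suggests
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause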
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-244815 -n old-k8s-version-244815
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-244815 -n old-k8s-version-244815: exit status 2 (260.126048ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
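If the root cause is the cgroup-driver mismatch that the suggestion in the log points at, one possible retry is sketched below, assuming the same flags as the failed start command above with the extra kubelet config appended, followed by the log collection the warning box asks for:

	out/minikube-linux-amd64 start -p old-k8s-version-244815 --memory=2200 --driver=kvm2 \
	  --container-runtime=crio --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd
	# attach logs.txt when filing the issue referenced above
	out/minikube-linux-amd64 -p old-k8s-version-244815 logs --file=logs.txt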
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-244815 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-649359 sudo                                | calico-649359             | jenkins | v1.35.0 | 11 Feb 25 03:22 UTC | 11 Feb 25 03:22 UTC |
	|         | systemctl status kubelet --all                       |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p calico-649359 sudo                                | calico-649359             | jenkins | v1.35.0 | 11 Feb 25 03:22 UTC | 11 Feb 25 03:22 UTC |
	|         | systemctl cat kubelet                                |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p calico-649359 sudo                                | calico-649359             | jenkins | v1.35.0 | 11 Feb 25 03:22 UTC | 11 Feb 25 03:22 UTC |
	|         | journalctl -xeu kubelet --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p calico-649359 sudo cat                            | calico-649359             | jenkins | v1.35.0 | 11 Feb 25 03:22 UTC | 11 Feb 25 03:22 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p calico-649359 sudo cat                            | calico-649359             | jenkins | v1.35.0 | 11 Feb 25 03:22 UTC | 11 Feb 25 03:22 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p calico-649359 sudo                                | calico-649359             | jenkins | v1.35.0 | 11 Feb 25 03:22 UTC |                     |
	|         | systemctl status docker --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p calico-649359 sudo                                | calico-649359             | jenkins | v1.35.0 | 11 Feb 25 03:22 UTC | 11 Feb 25 03:22 UTC |
	|         | systemctl cat docker                                 |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p calico-649359 sudo cat                            | calico-649359             | jenkins | v1.35.0 | 11 Feb 25 03:22 UTC | 11 Feb 25 03:22 UTC |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p calico-649359 sudo docker                         | calico-649359             | jenkins | v1.35.0 | 11 Feb 25 03:22 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p calico-649359 sudo                                | calico-649359             | jenkins | v1.35.0 | 11 Feb 25 03:22 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p calico-649359 sudo                                | calico-649359             | jenkins | v1.35.0 | 11 Feb 25 03:22 UTC | 11 Feb 25 03:22 UTC |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p calico-649359 sudo cat                            | calico-649359             | jenkins | v1.35.0 | 11 Feb 25 03:22 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p calico-649359 sudo cat                            | calico-649359             | jenkins | v1.35.0 | 11 Feb 25 03:22 UTC | 11 Feb 25 03:22 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p calico-649359 sudo                                | calico-649359             | jenkins | v1.35.0 | 11 Feb 25 03:22 UTC | 11 Feb 25 03:22 UTC |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p calico-649359 sudo                                | calico-649359             | jenkins | v1.35.0 | 11 Feb 25 03:22 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p calico-649359 sudo                                | calico-649359             | jenkins | v1.35.0 | 11 Feb 25 03:22 UTC | 11 Feb 25 03:22 UTC |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p calico-649359 sudo cat                            | calico-649359             | jenkins | v1.35.0 | 11 Feb 25 03:22 UTC | 11 Feb 25 03:22 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p calico-649359 sudo cat                            | calico-649359             | jenkins | v1.35.0 | 11 Feb 25 03:22 UTC | 11 Feb 25 03:22 UTC |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p calico-649359 sudo                                | calico-649359             | jenkins | v1.35.0 | 11 Feb 25 03:22 UTC | 11 Feb 25 03:22 UTC |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p calico-649359 sudo                                | calico-649359             | jenkins | v1.35.0 | 11 Feb 25 03:22 UTC | 11 Feb 25 03:22 UTC |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p calico-649359 sudo                                | calico-649359             | jenkins | v1.35.0 | 11 Feb 25 03:22 UTC | 11 Feb 25 03:22 UTC |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p calico-649359 sudo find                           | calico-649359             | jenkins | v1.35.0 | 11 Feb 25 03:22 UTC | 11 Feb 25 03:22 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p calico-649359 sudo crio                           | calico-649359             | jenkins | v1.35.0 | 11 Feb 25 03:22 UTC | 11 Feb 25 03:22 UTC |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p calico-649359                                     | calico-649359             | jenkins | v1.35.0 | 11 Feb 25 03:22 UTC | 11 Feb 25 03:22 UTC |
	| start   | -p enable-default-cni-649359                         | enable-default-cni-649359 | jenkins | v1.35.0 | 11 Feb 25 03:22 UTC |                     |
	|         | --memory=3072                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --enable-default-cni=true                            |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/11 03:22:14
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0211 03:22:14.007593   73602 out.go:345] Setting OutFile to fd 1 ...
	I0211 03:22:14.007759   73602 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 03:22:14.007768   73602 out.go:358] Setting ErrFile to fd 2...
	I0211 03:22:14.007773   73602 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 03:22:14.008007   73602 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20400-12456/.minikube/bin
	I0211 03:22:14.008586   73602 out.go:352] Setting JSON to false
	I0211 03:22:14.009686   73602 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":7485,"bootTime":1739236649,"procs":294,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0211 03:22:14.009786   73602 start.go:139] virtualization: kvm guest
	I0211 03:22:14.011450   73602 out.go:177] * [enable-default-cni-649359] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0211 03:22:14.013166   73602 notify.go:220] Checking for updates...
	I0211 03:22:14.013209   73602 out.go:177]   - MINIKUBE_LOCATION=20400
	I0211 03:22:14.014561   73602 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0211 03:22:14.015860   73602 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20400-12456/kubeconfig
	I0211 03:22:14.017116   73602 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20400-12456/.minikube
	I0211 03:22:14.018313   73602 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0211 03:22:14.019608   73602 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0211 03:22:14.021437   73602 config.go:182] Loaded profile config "custom-flannel-649359": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0211 03:22:14.021597   73602 config.go:182] Loaded profile config "default-k8s-diff-port-697681": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0211 03:22:14.021740   73602 config.go:182] Loaded profile config "old-k8s-version-244815": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0211 03:22:14.021872   73602 driver.go:394] Setting default libvirt URI to qemu:///system
	I0211 03:22:14.060087   73602 out.go:177] * Using the kvm2 driver based on user configuration
	I0211 03:22:14.061260   73602 start.go:297] selected driver: kvm2
	I0211 03:22:14.061272   73602 start.go:901] validating driver "kvm2" against <nil>
	I0211 03:22:14.061283   73602 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0211 03:22:14.061931   73602 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0211 03:22:14.062004   73602 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20400-12456/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0211 03:22:14.077954   73602 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0211 03:22:14.078024   73602 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	E0211 03:22:14.078373   73602 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0211 03:22:14.078413   73602 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0211 03:22:14.078461   73602 cni.go:84] Creating CNI manager for "bridge"
	I0211 03:22:14.078469   73602 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0211 03:22:14.078595   73602 start.go:340] cluster config:
	{Name:enable-default-cni-649359 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:enable-default-cni-649359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0211 03:22:14.078742   73602 iso.go:125] acquiring lock: {Name:mkf866c6e52b4efa55cc59a9f329105471716f9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0211 03:22:14.080254   73602 out.go:177] * Starting "enable-default-cni-649359" primary control-plane node in "enable-default-cni-649359" cluster
	I0211 03:22:14.081695   73602 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0211 03:22:14.081741   73602 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20400-12456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0211 03:22:14.081748   73602 cache.go:56] Caching tarball of preloaded images
	I0211 03:22:14.081846   73602 preload.go:172] Found /home/jenkins/minikube-integration/20400-12456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0211 03:22:14.081862   73602 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0211 03:22:14.081962   73602 profile.go:143] Saving config to /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/enable-default-cni-649359/config.json ...
	I0211 03:22:14.081987   73602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/enable-default-cni-649359/config.json: {Name:mk8d8f5c8192c54473efb67bb25c3f1e720f16f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 03:22:14.082134   73602 start.go:360] acquireMachinesLock for enable-default-cni-649359: {Name:mk0cbf79bfabdb28d0a301765db34c154a72eff0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0211 03:22:14.082175   73602 start.go:364] duration metric: took 22.223µs to acquireMachinesLock for "enable-default-cni-649359"
	I0211 03:22:14.082190   73602 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-649359 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:enable-default-cni-649359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0211 03:22:14.082249   73602 start.go:125] createHost starting for "" (driver="kvm2")
	I0211 03:22:13.614026   65022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 03:22:14.114036   65022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 03:22:14.246055   65022 kubeadm.go:1113] duration metric: took 4.920219379s to wait for elevateKubeSystemPrivileges
	I0211 03:22:14.246084   65022 kubeadm.go:394] duration metric: took 5m37.144583035s to StartCluster
	I0211 03:22:14.246099   65022 settings.go:142] acquiring lock: {Name:mkf2645a714cc5873c434b18e1494d4128c48052 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 03:22:14.246177   65022 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20400-12456/kubeconfig
	I0211 03:22:14.247341   65022 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12456/kubeconfig: {Name:mkd961d61f748b29ba3bb0ad55f8216d88f98444 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 03:22:14.247597   65022 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.113 Port:8444 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0211 03:22:14.247639   65022 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0211 03:22:14.247734   65022 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-697681"
	I0211 03:22:14.247753   65022 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-697681"
	W0211 03:22:14.247760   65022 addons.go:247] addon storage-provisioner should already be in state true
	I0211 03:22:14.247788   65022 host.go:66] Checking if "default-k8s-diff-port-697681" exists ...
	I0211 03:22:14.247834   65022 config.go:182] Loaded profile config "default-k8s-diff-port-697681": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0211 03:22:14.247889   65022 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-697681"
	I0211 03:22:14.247903   65022 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-697681"
	I0211 03:22:14.248240   65022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 03:22:14.248278   65022 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-697681"
	I0211 03:22:14.248290   65022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 03:22:14.248310   65022 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-697681"
	I0211 03:22:14.248317   65022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 03:22:14.248323   65022 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-697681"
	W0211 03:22:14.248331   65022 addons.go:247] addon metrics-server should already be in state true
	I0211 03:22:14.248360   65022 host.go:66] Checking if "default-k8s-diff-port-697681" exists ...
	I0211 03:22:14.248296   65022 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-697681"
	W0211 03:22:14.248395   65022 addons.go:247] addon dashboard should already be in state true
	I0211 03:22:14.248434   65022 host.go:66] Checking if "default-k8s-diff-port-697681" exists ...
	I0211 03:22:14.248281   65022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 03:22:14.248778   65022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 03:22:14.248826   65022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 03:22:14.248904   65022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 03:22:14.248936   65022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 03:22:14.249533   65022 out.go:177] * Verifying Kubernetes components...
	I0211 03:22:14.250938   65022 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0211 03:22:14.270111   65022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42843
	I0211 03:22:14.270533   65022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44301
	I0211 03:22:14.270710   65022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43183
	I0211 03:22:14.270928   65022 main.go:141] libmachine: () Calling .GetVersion
	I0211 03:22:14.271050   65022 main.go:141] libmachine: () Calling .GetVersion
	I0211 03:22:14.271554   65022 main.go:141] libmachine: Using API Version  1
	I0211 03:22:14.271571   65022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 03:22:14.271946   65022 main.go:141] libmachine: () Calling .GetMachineName
	I0211 03:22:14.272048   65022 main.go:141] libmachine: Using API Version  1
	I0211 03:22:14.272054   65022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 03:22:14.272107   65022 main.go:141] libmachine: () Calling .GetVersion
	I0211 03:22:14.272179   65022 main.go:141] libmachine: (default-k8s-diff-port-697681) Calling .GetState
	I0211 03:22:14.272462   65022 main.go:141] libmachine: () Calling .GetMachineName
	I0211 03:22:14.272626   65022 main.go:141] libmachine: Using API Version  1
	I0211 03:22:14.272646   65022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 03:22:14.272950   65022 main.go:141] libmachine: () Calling .GetMachineName
	I0211 03:22:14.273417   65022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 03:22:14.273449   65022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 03:22:14.273540   65022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 03:22:14.273562   65022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 03:22:14.274677   65022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36457
	I0211 03:22:14.275039   65022 main.go:141] libmachine: () Calling .GetVersion
	I0211 03:22:14.276259   65022 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-697681"
	W0211 03:22:14.276281   65022 addons.go:247] addon default-storageclass should already be in state true
	I0211 03:22:14.276310   65022 host.go:66] Checking if "default-k8s-diff-port-697681" exists ...
	I0211 03:22:14.276713   65022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 03:22:14.276745   65022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 03:22:14.277044   65022 main.go:141] libmachine: Using API Version  1
	I0211 03:22:14.277058   65022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 03:22:14.277485   65022 main.go:141] libmachine: () Calling .GetMachineName
	I0211 03:22:14.278012   65022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 03:22:14.278046   65022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 03:22:14.296649   65022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39525
	I0211 03:22:14.297294   65022 main.go:141] libmachine: () Calling .GetVersion
	I0211 03:22:14.297901   65022 main.go:141] libmachine: Using API Version  1
	I0211 03:22:14.297918   65022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 03:22:14.298341   65022 main.go:141] libmachine: () Calling .GetMachineName
	I0211 03:22:14.298557   65022 main.go:141] libmachine: (default-k8s-diff-port-697681) Calling .GetState
	I0211 03:22:14.300852   65022 main.go:141] libmachine: (default-k8s-diff-port-697681) Calling .DriverName
	I0211 03:22:14.303307   65022 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0211 03:22:14.303734   65022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40803
	I0211 03:22:14.304161   65022 main.go:141] libmachine: () Calling .GetVersion
	I0211 03:22:14.304444   65022 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0211 03:22:14.304460   65022 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0211 03:22:14.304479   65022 main.go:141] libmachine: (default-k8s-diff-port-697681) Calling .GetSSHHostname
	I0211 03:22:14.309206   65022 main.go:141] libmachine: Using API Version  1
	I0211 03:22:14.309244   65022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 03:22:14.309273   65022 main.go:141] libmachine: (default-k8s-diff-port-697681) Calling .GetSSHPort
	I0211 03:22:14.309210   65022 main.go:141] libmachine: (default-k8s-diff-port-697681) DBG | domain default-k8s-diff-port-697681 has defined MAC address 52:54:00:e6:45:c4 in network mk-default-k8s-diff-port-697681
	I0211 03:22:14.309344   65022 main.go:141] libmachine: (default-k8s-diff-port-697681) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:45:c4", ip: ""} in network mk-default-k8s-diff-port-697681: {Iface:virbr4 ExpiryTime:2025-02-11 04:16:24 +0000 UTC Type:0 Mac:52:54:00:e6:45:c4 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:default-k8s-diff-port-697681 Clientid:01:52:54:00:e6:45:c4}
	I0211 03:22:14.309370   65022 main.go:141] libmachine: (default-k8s-diff-port-697681) DBG | domain default-k8s-diff-port-697681 has defined IP address 192.168.72.113 and MAC address 52:54:00:e6:45:c4 in network mk-default-k8s-diff-port-697681
	I0211 03:22:14.309475   65022 main.go:141] libmachine: (default-k8s-diff-port-697681) Calling .GetSSHKeyPath
	I0211 03:22:14.309678   65022 main.go:141] libmachine: (default-k8s-diff-port-697681) Calling .GetSSHUsername
	I0211 03:22:14.309818   65022 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/default-k8s-diff-port-697681/id_rsa Username:docker}
	I0211 03:22:14.309879   65022 main.go:141] libmachine: () Calling .GetMachineName
	I0211 03:22:14.310556   65022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 03:22:14.310615   65022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 03:22:14.314115   65022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45683
	I0211 03:22:14.314610   65022 main.go:141] libmachine: () Calling .GetVersion
	I0211 03:22:14.315044   65022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39505
	I0211 03:22:14.315132   65022 main.go:141] libmachine: Using API Version  1
	I0211 03:22:14.315144   65022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 03:22:14.315505   65022 main.go:141] libmachine: () Calling .GetVersion
	I0211 03:22:14.315516   65022 main.go:141] libmachine: () Calling .GetMachineName
	I0211 03:22:14.315768   65022 main.go:141] libmachine: (default-k8s-diff-port-697681) Calling .GetState
	I0211 03:22:14.316142   65022 main.go:141] libmachine: Using API Version  1
	I0211 03:22:14.316160   65022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 03:22:14.316629   65022 main.go:141] libmachine: () Calling .GetMachineName
	I0211 03:22:14.317171   65022 main.go:141] libmachine: (default-k8s-diff-port-697681) Calling .GetState
	I0211 03:22:14.317564   65022 main.go:141] libmachine: (default-k8s-diff-port-697681) Calling .DriverName
	I0211 03:22:14.319321   65022 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0211 03:22:14.319819   65022 main.go:141] libmachine: (default-k8s-diff-port-697681) Calling .DriverName
	I0211 03:22:14.320612   65022 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0211 03:22:14.320627   65022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0211 03:22:14.320640   65022 main.go:141] libmachine: (default-k8s-diff-port-697681) Calling .GetSSHHostname
	I0211 03:22:14.321278   65022 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0211 03:22:14.322655   65022 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0211 03:22:15.584111   71847 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0211 03:22:15.584196   71847 kubeadm.go:310] [preflight] Running pre-flight checks
	I0211 03:22:15.584308   71847 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0211 03:22:15.584436   71847 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0211 03:22:15.584561   71847 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0211 03:22:15.584640   71847 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0211 03:22:15.586252   71847 out.go:235]   - Generating certificates and keys ...
	I0211 03:22:15.586355   71847 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0211 03:22:15.586460   71847 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0211 03:22:15.586569   71847 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0211 03:22:15.586651   71847 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0211 03:22:15.586749   71847 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0211 03:22:15.586817   71847 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0211 03:22:15.586900   71847 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0211 03:22:15.587052   71847 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [custom-flannel-649359 localhost] and IPs [192.168.61.224 127.0.0.1 ::1]
	I0211 03:22:15.587132   71847 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0211 03:22:15.587315   71847 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [custom-flannel-649359 localhost] and IPs [192.168.61.224 127.0.0.1 ::1]
	I0211 03:22:15.587422   71847 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0211 03:22:15.587561   71847 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0211 03:22:15.587632   71847 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0211 03:22:15.587724   71847 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0211 03:22:15.587809   71847 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0211 03:22:15.587903   71847 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0211 03:22:15.588007   71847 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0211 03:22:15.588087   71847 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0211 03:22:15.588163   71847 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0211 03:22:15.588280   71847 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0211 03:22:15.588396   71847 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0211 03:22:15.589721   71847 out.go:235]   - Booting up control plane ...
	I0211 03:22:15.589836   71847 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0211 03:22:15.589934   71847 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0211 03:22:15.590019   71847 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0211 03:22:15.590177   71847 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0211 03:22:15.590289   71847 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0211 03:22:15.590360   71847 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0211 03:22:15.590532   71847 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0211 03:22:15.590705   71847 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0211 03:22:15.590802   71847 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001034883s
	I0211 03:22:15.590944   71847 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0211 03:22:15.591026   71847 kubeadm.go:310] [api-check] The API server is healthy after 5.001805598s
	I0211 03:22:15.591173   71847 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0211 03:22:15.591332   71847 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0211 03:22:15.591417   71847 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0211 03:22:15.591656   71847 kubeadm.go:310] [mark-control-plane] Marking the node custom-flannel-649359 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0211 03:22:15.591735   71847 kubeadm.go:310] [bootstrap-token] Using token: 3y6jz7.f6c4veeakicguifz
	I0211 03:22:15.593264   71847 out.go:235]   - Configuring RBAC rules ...
	I0211 03:22:15.593405   71847 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0211 03:22:15.593503   71847 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0211 03:22:15.593694   71847 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0211 03:22:15.593851   71847 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0211 03:22:15.594021   71847 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0211 03:22:15.594144   71847 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0211 03:22:15.594316   71847 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0211 03:22:15.594391   71847 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0211 03:22:15.594452   71847 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0211 03:22:15.594460   71847 kubeadm.go:310] 
	I0211 03:22:15.594540   71847 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0211 03:22:15.594560   71847 kubeadm.go:310] 
	I0211 03:22:15.594677   71847 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0211 03:22:15.594687   71847 kubeadm.go:310] 
	I0211 03:22:15.594739   71847 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0211 03:22:15.594831   71847 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0211 03:22:15.594926   71847 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0211 03:22:15.594938   71847 kubeadm.go:310] 
	I0211 03:22:15.595020   71847 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0211 03:22:15.595034   71847 kubeadm.go:310] 
	I0211 03:22:15.595098   71847 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0211 03:22:15.595107   71847 kubeadm.go:310] 
	I0211 03:22:15.595185   71847 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0211 03:22:15.595303   71847 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0211 03:22:15.595404   71847 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0211 03:22:15.595418   71847 kubeadm.go:310] 
	I0211 03:22:15.595523   71847 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0211 03:22:15.595630   71847 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0211 03:22:15.595643   71847 kubeadm.go:310] 
	I0211 03:22:15.595763   71847 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3y6jz7.f6c4veeakicguifz \
	I0211 03:22:15.595910   71847 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2e161f5cde2e462cb9fb80847b9945297701bdc8e7251bde04f5738d45684f8b \
	I0211 03:22:15.595941   71847 kubeadm.go:310] 	--control-plane 
	I0211 03:22:15.595949   71847 kubeadm.go:310] 
	I0211 03:22:15.596058   71847 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0211 03:22:15.596071   71847 kubeadm.go:310] 
	I0211 03:22:15.596185   71847 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3y6jz7.f6c4veeakicguifz \
	I0211 03:22:15.596333   71847 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2e161f5cde2e462cb9fb80847b9945297701bdc8e7251bde04f5738d45684f8b 
	I0211 03:22:15.596351   71847 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0211 03:22:15.597833   71847 out.go:177] * Configuring testdata/kube-flannel.yaml (Container Networking Interface) ...
	I0211 03:22:14.323722   65022 main.go:141] libmachine: (default-k8s-diff-port-697681) DBG | domain default-k8s-diff-port-697681 has defined MAC address 52:54:00:e6:45:c4 in network mk-default-k8s-diff-port-697681
	I0211 03:22:14.323749   65022 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0211 03:22:14.323763   65022 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0211 03:22:14.323785   65022 main.go:141] libmachine: (default-k8s-diff-port-697681) Calling .GetSSHHostname
	I0211 03:22:14.323883   65022 main.go:141] libmachine: (default-k8s-diff-port-697681) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:45:c4", ip: ""} in network mk-default-k8s-diff-port-697681: {Iface:virbr4 ExpiryTime:2025-02-11 04:16:24 +0000 UTC Type:0 Mac:52:54:00:e6:45:c4 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:default-k8s-diff-port-697681 Clientid:01:52:54:00:e6:45:c4}
	I0211 03:22:14.323908   65022 main.go:141] libmachine: (default-k8s-diff-port-697681) DBG | domain default-k8s-diff-port-697681 has defined IP address 192.168.72.113 and MAC address 52:54:00:e6:45:c4 in network mk-default-k8s-diff-port-697681
	I0211 03:22:14.324053   65022 main.go:141] libmachine: (default-k8s-diff-port-697681) Calling .GetSSHPort
	I0211 03:22:14.324333   65022 main.go:141] libmachine: (default-k8s-diff-port-697681) Calling .GetSSHKeyPath
	I0211 03:22:14.324444   65022 main.go:141] libmachine: (default-k8s-diff-port-697681) Calling .GetSSHUsername
	I0211 03:22:14.324567   65022 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/default-k8s-diff-port-697681/id_rsa Username:docker}
	I0211 03:22:14.329959   65022 main.go:141] libmachine: (default-k8s-diff-port-697681) DBG | domain default-k8s-diff-port-697681 has defined MAC address 52:54:00:e6:45:c4 in network mk-default-k8s-diff-port-697681
	I0211 03:22:14.330340   65022 main.go:141] libmachine: (default-k8s-diff-port-697681) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:45:c4", ip: ""} in network mk-default-k8s-diff-port-697681: {Iface:virbr4 ExpiryTime:2025-02-11 04:16:24 +0000 UTC Type:0 Mac:52:54:00:e6:45:c4 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:default-k8s-diff-port-697681 Clientid:01:52:54:00:e6:45:c4}
	I0211 03:22:14.330367   65022 main.go:141] libmachine: (default-k8s-diff-port-697681) DBG | domain default-k8s-diff-port-697681 has defined IP address 192.168.72.113 and MAC address 52:54:00:e6:45:c4 in network mk-default-k8s-diff-port-697681
	I0211 03:22:14.330654   65022 main.go:141] libmachine: (default-k8s-diff-port-697681) Calling .GetSSHPort
	I0211 03:22:14.330835   65022 main.go:141] libmachine: (default-k8s-diff-port-697681) Calling .GetSSHKeyPath
	I0211 03:22:14.330997   65022 main.go:141] libmachine: (default-k8s-diff-port-697681) Calling .GetSSHUsername
	I0211 03:22:14.331104   65022 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/default-k8s-diff-port-697681/id_rsa Username:docker}
	I0211 03:22:14.351572   65022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33919
	I0211 03:22:14.352089   65022 main.go:141] libmachine: () Calling .GetVersion
	I0211 03:22:14.352737   65022 main.go:141] libmachine: Using API Version  1
	I0211 03:22:14.352769   65022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 03:22:14.353350   65022 main.go:141] libmachine: () Calling .GetMachineName
	I0211 03:22:14.353565   65022 main.go:141] libmachine: (default-k8s-diff-port-697681) Calling .GetState
	I0211 03:22:14.355394   65022 main.go:141] libmachine: (default-k8s-diff-port-697681) Calling .DriverName
	I0211 03:22:14.355625   65022 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0211 03:22:14.355643   65022 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0211 03:22:14.355663   65022 main.go:141] libmachine: (default-k8s-diff-port-697681) Calling .GetSSHHostname
	I0211 03:22:14.358982   65022 main.go:141] libmachine: (default-k8s-diff-port-697681) DBG | domain default-k8s-diff-port-697681 has defined MAC address 52:54:00:e6:45:c4 in network mk-default-k8s-diff-port-697681
	I0211 03:22:14.359560   65022 main.go:141] libmachine: (default-k8s-diff-port-697681) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:45:c4", ip: ""} in network mk-default-k8s-diff-port-697681: {Iface:virbr4 ExpiryTime:2025-02-11 04:16:24 +0000 UTC Type:0 Mac:52:54:00:e6:45:c4 Iaid: IPaddr:192.168.72.113 Prefix:24 Hostname:default-k8s-diff-port-697681 Clientid:01:52:54:00:e6:45:c4}
	I0211 03:22:14.359582   65022 main.go:141] libmachine: (default-k8s-diff-port-697681) DBG | domain default-k8s-diff-port-697681 has defined IP address 192.168.72.113 and MAC address 52:54:00:e6:45:c4 in network mk-default-k8s-diff-port-697681
	I0211 03:22:14.359828   65022 main.go:141] libmachine: (default-k8s-diff-port-697681) Calling .GetSSHPort
	I0211 03:22:14.360014   65022 main.go:141] libmachine: (default-k8s-diff-port-697681) Calling .GetSSHKeyPath
	I0211 03:22:14.360205   65022 main.go:141] libmachine: (default-k8s-diff-port-697681) Calling .GetSSHUsername
	I0211 03:22:14.360311   65022 sshutil.go:53] new ssh client: &{IP:192.168.72.113 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/default-k8s-diff-port-697681/id_rsa Username:docker}
	I0211 03:22:14.512017   65022 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0211 03:22:14.564575   65022 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-697681" to be "Ready" ...
	I0211 03:22:14.591827   65022 node_ready.go:49] node "default-k8s-diff-port-697681" has status "Ready":"True"
	I0211 03:22:14.591852   65022 node_ready.go:38] duration metric: took 27.246872ms for node "default-k8s-diff-port-697681" to be "Ready" ...
	I0211 03:22:14.591863   65022 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0211 03:22:14.594962   65022 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-qzb2w" in "kube-system" namespace to be "Ready" ...
	I0211 03:22:14.634456   65022 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0211 03:22:14.646355   65022 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0211 03:22:14.646379   65022 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0211 03:22:14.683845   65022 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0211 03:22:14.732558   65022 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0211 03:22:14.732619   65022 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0211 03:22:14.754233   65022 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0211 03:22:14.754263   65022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0211 03:22:14.891272   65022 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0211 03:22:14.891310   65022 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0211 03:22:14.901441   65022 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0211 03:22:14.901466   65022 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0211 03:22:14.975242   65022 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0211 03:22:14.975269   65022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0211 03:22:15.066501   65022 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0211 03:22:15.066531   65022 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0211 03:22:15.096806   65022 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0211 03:22:15.096830   65022 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0211 03:22:15.153222   65022 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0211 03:22:15.237458   65022 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0211 03:22:15.237484   65022 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0211 03:22:15.290265   65022 main.go:141] libmachine: Making call to close driver server
	I0211 03:22:15.290290   65022 main.go:141] libmachine: (default-k8s-diff-port-697681) Calling .Close
	I0211 03:22:15.290607   65022 main.go:141] libmachine: Successfully made call to close driver server
	I0211 03:22:15.290630   65022 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 03:22:15.290641   65022 main.go:141] libmachine: Making call to close driver server
	I0211 03:22:15.290650   65022 main.go:141] libmachine: (default-k8s-diff-port-697681) Calling .Close
	I0211 03:22:15.290970   65022 main.go:141] libmachine: (default-k8s-diff-port-697681) DBG | Closing plugin on server side
	I0211 03:22:15.290994   65022 main.go:141] libmachine: Successfully made call to close driver server
	I0211 03:22:15.291009   65022 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 03:22:15.301177   65022 main.go:141] libmachine: Making call to close driver server
	I0211 03:22:15.301205   65022 main.go:141] libmachine: (default-k8s-diff-port-697681) Calling .Close
	I0211 03:22:15.301501   65022 main.go:141] libmachine: Successfully made call to close driver server
	I0211 03:22:15.301517   65022 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 03:22:15.301515   65022 main.go:141] libmachine: (default-k8s-diff-port-697681) DBG | Closing plugin on server side
	I0211 03:22:15.352653   65022 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0211 03:22:15.352682   65022 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0211 03:22:15.440917   65022 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0211 03:22:15.440947   65022 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0211 03:22:15.523385   65022 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0211 03:22:15.523415   65022 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0211 03:22:15.594854   65022 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0211 03:22:16.147117   65022 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.463232962s)
	I0211 03:22:16.147185   65022 main.go:141] libmachine: Making call to close driver server
	I0211 03:22:16.147198   65022 main.go:141] libmachine: (default-k8s-diff-port-697681) Calling .Close
	I0211 03:22:16.147456   65022 main.go:141] libmachine: (default-k8s-diff-port-697681) DBG | Closing plugin on server side
	I0211 03:22:16.147511   65022 main.go:141] libmachine: Successfully made call to close driver server
	I0211 03:22:16.147524   65022 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 03:22:16.147542   65022 main.go:141] libmachine: Making call to close driver server
	I0211 03:22:16.147566   65022 main.go:141] libmachine: (default-k8s-diff-port-697681) Calling .Close
	I0211 03:22:16.149532   65022 main.go:141] libmachine: Successfully made call to close driver server
	I0211 03:22:16.149552   65022 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 03:22:16.709119   65022 pod_ready.go:103] pod "coredns-668d6bf9bc-qzb2w" in "kube-system" namespace has status "Ready":"False"
	I0211 03:22:16.932188   65022 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.778916506s)
	I0211 03:22:16.932243   65022 main.go:141] libmachine: Making call to close driver server
	I0211 03:22:16.932259   65022 main.go:141] libmachine: (default-k8s-diff-port-697681) Calling .Close
	I0211 03:22:16.932588   65022 main.go:141] libmachine: (default-k8s-diff-port-697681) DBG | Closing plugin on server side
	I0211 03:22:16.934298   65022 main.go:141] libmachine: Successfully made call to close driver server
	I0211 03:22:16.934317   65022 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 03:22:16.934325   65022 main.go:141] libmachine: Making call to close driver server
	I0211 03:22:16.934333   65022 main.go:141] libmachine: (default-k8s-diff-port-697681) Calling .Close
	I0211 03:22:16.934623   65022 main.go:141] libmachine: Successfully made call to close driver server
	I0211 03:22:16.934645   65022 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 03:22:16.934656   65022 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-697681"
	I0211 03:22:17.613375   65022 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.018458128s)
	I0211 03:22:17.613434   65022 main.go:141] libmachine: Making call to close driver server
	I0211 03:22:17.613450   65022 main.go:141] libmachine: (default-k8s-diff-port-697681) Calling .Close
	I0211 03:22:17.613727   65022 main.go:141] libmachine: (default-k8s-diff-port-697681) DBG | Closing plugin on server side
	I0211 03:22:17.613757   65022 main.go:141] libmachine: Successfully made call to close driver server
	I0211 03:22:17.613770   65022 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 03:22:17.613784   65022 main.go:141] libmachine: Making call to close driver server
	I0211 03:22:17.613796   65022 main.go:141] libmachine: (default-k8s-diff-port-697681) Calling .Close
	I0211 03:22:17.615303   65022 main.go:141] libmachine: (default-k8s-diff-port-697681) DBG | Closing plugin on server side
	I0211 03:22:17.615324   65022 main.go:141] libmachine: Successfully made call to close driver server
	I0211 03:22:17.615346   65022 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 03:22:17.616747   65022 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-697681 addons enable metrics-server
	
	I0211 03:22:17.618091   65022 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0211 03:22:17.619188   65022 addons.go:514] duration metric: took 3.371556049s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0211 03:22:15.599113   71847 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.1/kubectl ...
	I0211 03:22:15.599164   71847 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I0211 03:22:15.605192   71847 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/tmp/minikube/cni.yaml': No such file or directory
	I0211 03:22:15.605224   71847 ssh_runner.go:362] scp testdata/kube-flannel.yaml --> /var/tmp/minikube/cni.yaml (4591 bytes)
	I0211 03:22:15.634194   71847 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0211 03:22:16.244621   71847 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes custom-flannel-649359 minikube.k8s.io/updated_at=2025_02_11T03_22_16_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=8e91f70b9b442caa4bec80b031add390ac34d321 minikube.k8s.io/name=custom-flannel-649359 minikube.k8s.io/primary=true
	I0211 03:22:16.244621   71847 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 03:22:16.244763   71847 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0211 03:22:16.419146   71847 ops.go:34] apiserver oom_adj: -16
	I0211 03:22:16.419156   71847 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 03:22:16.919569   71847 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 03:22:17.419922   71847 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 03:22:17.919595   71847 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 03:22:18.419684   71847 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 03:22:18.919806   71847 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 03:22:14.083738   73602 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0211 03:22:14.083882   73602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 03:22:14.083920   73602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 03:22:14.099871   73602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43321
	I0211 03:22:14.100479   73602 main.go:141] libmachine: () Calling .GetVersion
	I0211 03:22:14.101079   73602 main.go:141] libmachine: Using API Version  1
	I0211 03:22:14.101103   73602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 03:22:14.101429   73602 main.go:141] libmachine: () Calling .GetMachineName
	I0211 03:22:14.101609   73602 main.go:141] libmachine: (enable-default-cni-649359) Calling .GetMachineName
	I0211 03:22:14.101777   73602 main.go:141] libmachine: (enable-default-cni-649359) Calling .DriverName
	I0211 03:22:14.101941   73602 start.go:159] libmachine.API.Create for "enable-default-cni-649359" (driver="kvm2")
	I0211 03:22:14.102058   73602 client.go:168] LocalClient.Create starting
	I0211 03:22:14.102103   73602 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca.pem
	I0211 03:22:14.102161   73602 main.go:141] libmachine: Decoding PEM data...
	I0211 03:22:14.102192   73602 main.go:141] libmachine: Parsing certificate...
	I0211 03:22:14.102273   73602 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20400-12456/.minikube/certs/cert.pem
	I0211 03:22:14.102314   73602 main.go:141] libmachine: Decoding PEM data...
	I0211 03:22:14.102335   73602 main.go:141] libmachine: Parsing certificate...
	I0211 03:22:14.102365   73602 main.go:141] libmachine: Running pre-create checks...
	I0211 03:22:14.102388   73602 main.go:141] libmachine: (enable-default-cni-649359) Calling .PreCreateCheck
	I0211 03:22:14.102819   73602 main.go:141] libmachine: (enable-default-cni-649359) Calling .GetConfigRaw
	I0211 03:22:14.103249   73602 main.go:141] libmachine: Creating machine...
	I0211 03:22:14.103262   73602 main.go:141] libmachine: (enable-default-cni-649359) Calling .Create
	I0211 03:22:14.103405   73602 main.go:141] libmachine: (enable-default-cni-649359) creating KVM machine...
	I0211 03:22:14.103427   73602 main.go:141] libmachine: (enable-default-cni-649359) creating network...
	I0211 03:22:14.104796   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG | found existing default KVM network
	I0211 03:22:14.106097   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG | I0211 03:22:14.105961   73625 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:87:bb:4d} reservation:<nil>}
	I0211 03:22:14.107223   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG | I0211 03:22:14.107134   73625 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000209f60}
	I0211 03:22:14.107280   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG | created network xml: 
	I0211 03:22:14.107303   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG | <network>
	I0211 03:22:14.107315   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG |   <name>mk-enable-default-cni-649359</name>
	I0211 03:22:14.107327   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG |   <dns enable='no'/>
	I0211 03:22:14.107335   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG |   
	I0211 03:22:14.107343   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0211 03:22:14.107377   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG |     <dhcp>
	I0211 03:22:14.107398   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0211 03:22:14.107410   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG |     </dhcp>
	I0211 03:22:14.107417   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG |   </ip>
	I0211 03:22:14.107425   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG |   
	I0211 03:22:14.107432   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG | </network>
	I0211 03:22:14.107442   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG | 
	I0211 03:22:14.112200   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG | trying to create private KVM network mk-enable-default-cni-649359 192.168.50.0/24...
	I0211 03:22:14.190773   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG | private KVM network mk-enable-default-cni-649359 192.168.50.0/24 created
	I0211 03:22:14.190826   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG | I0211 03:22:14.190725   73625 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20400-12456/.minikube
	I0211 03:22:14.190850   73602 main.go:141] libmachine: (enable-default-cni-649359) setting up store path in /home/jenkins/minikube-integration/20400-12456/.minikube/machines/enable-default-cni-649359 ...
	I0211 03:22:14.190889   73602 main.go:141] libmachine: (enable-default-cni-649359) building disk image from file:///home/jenkins/minikube-integration/20400-12456/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0211 03:22:14.190918   73602 main.go:141] libmachine: (enable-default-cni-649359) Downloading /home/jenkins/minikube-integration/20400-12456/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20400-12456/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0211 03:22:14.518777   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG | I0211 03:22:14.518635   73625 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20400-12456/.minikube/machines/enable-default-cni-649359/id_rsa...
	I0211 03:22:14.713032   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG | I0211 03:22:14.712861   73625 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20400-12456/.minikube/machines/enable-default-cni-649359/enable-default-cni-649359.rawdisk...
	I0211 03:22:14.713060   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG | Writing magic tar header
	I0211 03:22:14.713079   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG | Writing SSH key tar header
	I0211 03:22:14.713093   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG | I0211 03:22:14.712970   73625 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20400-12456/.minikube/machines/enable-default-cni-649359 ...
	I0211 03:22:14.713116   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20400-12456/.minikube/machines/enable-default-cni-649359
	I0211 03:22:14.713125   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20400-12456/.minikube/machines
	I0211 03:22:14.713137   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20400-12456/.minikube
	I0211 03:22:14.713147   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20400-12456
	I0211 03:22:14.713158   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0211 03:22:14.713167   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG | checking permissions on dir: /home/jenkins
	I0211 03:22:14.713181   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG | checking permissions on dir: /home
	I0211 03:22:14.713189   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG | skipping /home - not owner
	I0211 03:22:14.713202   73602 main.go:141] libmachine: (enable-default-cni-649359) setting executable bit set on /home/jenkins/minikube-integration/20400-12456/.minikube/machines/enable-default-cni-649359 (perms=drwx------)
	I0211 03:22:14.713215   73602 main.go:141] libmachine: (enable-default-cni-649359) setting executable bit set on /home/jenkins/minikube-integration/20400-12456/.minikube/machines (perms=drwxr-xr-x)
	I0211 03:22:14.713224   73602 main.go:141] libmachine: (enable-default-cni-649359) setting executable bit set on /home/jenkins/minikube-integration/20400-12456/.minikube (perms=drwxr-xr-x)
	I0211 03:22:14.713235   73602 main.go:141] libmachine: (enable-default-cni-649359) setting executable bit set on /home/jenkins/minikube-integration/20400-12456 (perms=drwxrwxr-x)
	I0211 03:22:14.713243   73602 main.go:141] libmachine: (enable-default-cni-649359) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0211 03:22:14.713253   73602 main.go:141] libmachine: (enable-default-cni-649359) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0211 03:22:14.713260   73602 main.go:141] libmachine: (enable-default-cni-649359) creating domain...
	I0211 03:22:14.714254   73602 main.go:141] libmachine: (enable-default-cni-649359) define libvirt domain using xml: 
	I0211 03:22:14.714282   73602 main.go:141] libmachine: (enable-default-cni-649359) <domain type='kvm'>
	I0211 03:22:14.714294   73602 main.go:141] libmachine: (enable-default-cni-649359)   <name>enable-default-cni-649359</name>
	I0211 03:22:14.714310   73602 main.go:141] libmachine: (enable-default-cni-649359)   <memory unit='MiB'>3072</memory>
	I0211 03:22:14.714329   73602 main.go:141] libmachine: (enable-default-cni-649359)   <vcpu>2</vcpu>
	I0211 03:22:14.714337   73602 main.go:141] libmachine: (enable-default-cni-649359)   <features>
	I0211 03:22:14.714385   73602 main.go:141] libmachine: (enable-default-cni-649359)     <acpi/>
	I0211 03:22:14.714416   73602 main.go:141] libmachine: (enable-default-cni-649359)     <apic/>
	I0211 03:22:14.714429   73602 main.go:141] libmachine: (enable-default-cni-649359)     <pae/>
	I0211 03:22:14.714436   73602 main.go:141] libmachine: (enable-default-cni-649359)     
	I0211 03:22:14.714447   73602 main.go:141] libmachine: (enable-default-cni-649359)   </features>
	I0211 03:22:14.714454   73602 main.go:141] libmachine: (enable-default-cni-649359)   <cpu mode='host-passthrough'>
	I0211 03:22:14.714461   73602 main.go:141] libmachine: (enable-default-cni-649359)   
	I0211 03:22:14.714467   73602 main.go:141] libmachine: (enable-default-cni-649359)   </cpu>
	I0211 03:22:14.714479   73602 main.go:141] libmachine: (enable-default-cni-649359)   <os>
	I0211 03:22:14.714485   73602 main.go:141] libmachine: (enable-default-cni-649359)     <type>hvm</type>
	I0211 03:22:14.714501   73602 main.go:141] libmachine: (enable-default-cni-649359)     <boot dev='cdrom'/>
	I0211 03:22:14.714511   73602 main.go:141] libmachine: (enable-default-cni-649359)     <boot dev='hd'/>
	I0211 03:22:14.714519   73602 main.go:141] libmachine: (enable-default-cni-649359)     <bootmenu enable='no'/>
	I0211 03:22:14.714528   73602 main.go:141] libmachine: (enable-default-cni-649359)   </os>
	I0211 03:22:14.714535   73602 main.go:141] libmachine: (enable-default-cni-649359)   <devices>
	I0211 03:22:14.714545   73602 main.go:141] libmachine: (enable-default-cni-649359)     <disk type='file' device='cdrom'>
	I0211 03:22:14.714570   73602 main.go:141] libmachine: (enable-default-cni-649359)       <source file='/home/jenkins/minikube-integration/20400-12456/.minikube/machines/enable-default-cni-649359/boot2docker.iso'/>
	I0211 03:22:14.714582   73602 main.go:141] libmachine: (enable-default-cni-649359)       <target dev='hdc' bus='scsi'/>
	I0211 03:22:14.714590   73602 main.go:141] libmachine: (enable-default-cni-649359)       <readonly/>
	I0211 03:22:14.714602   73602 main.go:141] libmachine: (enable-default-cni-649359)     </disk>
	I0211 03:22:14.714613   73602 main.go:141] libmachine: (enable-default-cni-649359)     <disk type='file' device='disk'>
	I0211 03:22:14.714624   73602 main.go:141] libmachine: (enable-default-cni-649359)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0211 03:22:14.714638   73602 main.go:141] libmachine: (enable-default-cni-649359)       <source file='/home/jenkins/minikube-integration/20400-12456/.minikube/machines/enable-default-cni-649359/enable-default-cni-649359.rawdisk'/>
	I0211 03:22:14.714653   73602 main.go:141] libmachine: (enable-default-cni-649359)       <target dev='hda' bus='virtio'/>
	I0211 03:22:14.714662   73602 main.go:141] libmachine: (enable-default-cni-649359)     </disk>
	I0211 03:22:14.714672   73602 main.go:141] libmachine: (enable-default-cni-649359)     <interface type='network'>
	I0211 03:22:14.714682   73602 main.go:141] libmachine: (enable-default-cni-649359)       <source network='mk-enable-default-cni-649359'/>
	I0211 03:22:14.714692   73602 main.go:141] libmachine: (enable-default-cni-649359)       <model type='virtio'/>
	I0211 03:22:14.714700   73602 main.go:141] libmachine: (enable-default-cni-649359)     </interface>
	I0211 03:22:14.714710   73602 main.go:141] libmachine: (enable-default-cni-649359)     <interface type='network'>
	I0211 03:22:14.714718   73602 main.go:141] libmachine: (enable-default-cni-649359)       <source network='default'/>
	I0211 03:22:14.714725   73602 main.go:141] libmachine: (enable-default-cni-649359)       <model type='virtio'/>
	I0211 03:22:14.714733   73602 main.go:141] libmachine: (enable-default-cni-649359)     </interface>
	I0211 03:22:14.714743   73602 main.go:141] libmachine: (enable-default-cni-649359)     <serial type='pty'>
	I0211 03:22:14.714752   73602 main.go:141] libmachine: (enable-default-cni-649359)       <target port='0'/>
	I0211 03:22:14.714761   73602 main.go:141] libmachine: (enable-default-cni-649359)     </serial>
	I0211 03:22:14.714774   73602 main.go:141] libmachine: (enable-default-cni-649359)     <console type='pty'>
	I0211 03:22:14.714785   73602 main.go:141] libmachine: (enable-default-cni-649359)       <target type='serial' port='0'/>
	I0211 03:22:14.714800   73602 main.go:141] libmachine: (enable-default-cni-649359)     </console>
	I0211 03:22:14.714806   73602 main.go:141] libmachine: (enable-default-cni-649359)     <rng model='virtio'>
	I0211 03:22:14.714814   73602 main.go:141] libmachine: (enable-default-cni-649359)       <backend model='random'>/dev/random</backend>
	I0211 03:22:14.714823   73602 main.go:141] libmachine: (enable-default-cni-649359)     </rng>
	I0211 03:22:14.714831   73602 main.go:141] libmachine: (enable-default-cni-649359)     
	I0211 03:22:14.714840   73602 main.go:141] libmachine: (enable-default-cni-649359)     
	I0211 03:22:14.714848   73602 main.go:141] libmachine: (enable-default-cni-649359)   </devices>
	I0211 03:22:14.714858   73602 main.go:141] libmachine: (enable-default-cni-649359) </domain>
	I0211 03:22:14.714891   73602 main.go:141] libmachine: (enable-default-cni-649359) 
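	For reference, a minimal sketch (not captured by the test run) of how the domain defined above could be inspected by hand with virsh on the Jenkins host; the domain and network names are taken from the log, everything else is an assumption:
	
	    # assumes libvirt/virsh is available on the host that ran the kvm2 driver
	    virsh list --all                                            # the newly defined domain should be listed
	    virsh dumpxml enable-default-cni-649359                     # full domain XML, including the <os>/<devices> sections logged above
	    virsh net-list --all                                        # 'default' and 'mk-enable-default-cni-649359' should be active
	    virsh domifaddr enable-default-cni-649359 --source lease    # DHCP lease the driver polls for below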
	I0211 03:22:14.719428   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG | domain enable-default-cni-649359 has defined MAC address 52:54:00:9b:93:fe in network default
	I0211 03:22:14.719952   73602 main.go:141] libmachine: (enable-default-cni-649359) starting domain...
	I0211 03:22:14.719983   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG | domain enable-default-cni-649359 has defined MAC address 52:54:00:13:01:c9 in network mk-enable-default-cni-649359
	I0211 03:22:14.719993   73602 main.go:141] libmachine: (enable-default-cni-649359) ensuring networks are active...
	I0211 03:22:14.720759   73602 main.go:141] libmachine: (enable-default-cni-649359) Ensuring network default is active
	I0211 03:22:14.721156   73602 main.go:141] libmachine: (enable-default-cni-649359) Ensuring network mk-enable-default-cni-649359 is active
	I0211 03:22:14.721734   73602 main.go:141] libmachine: (enable-default-cni-649359) getting domain XML...
	I0211 03:22:14.722483   73602 main.go:141] libmachine: (enable-default-cni-649359) creating domain...
	I0211 03:22:16.126012   73602 main.go:141] libmachine: (enable-default-cni-649359) waiting for IP...
	I0211 03:22:16.126827   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG | domain enable-default-cni-649359 has defined MAC address 52:54:00:13:01:c9 in network mk-enable-default-cni-649359
	I0211 03:22:16.127316   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG | unable to find current IP address of domain enable-default-cni-649359 in network mk-enable-default-cni-649359
	I0211 03:22:16.127347   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG | I0211 03:22:16.127307   73625 retry.go:31] will retry after 249.798159ms: waiting for domain to come up
	I0211 03:22:16.379184   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG | domain enable-default-cni-649359 has defined MAC address 52:54:00:13:01:c9 in network mk-enable-default-cni-649359
	I0211 03:22:16.379941   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG | unable to find current IP address of domain enable-default-cni-649359 in network mk-enable-default-cni-649359
	I0211 03:22:16.379975   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG | I0211 03:22:16.379896   73625 retry.go:31] will retry after 323.676586ms: waiting for domain to come up
	I0211 03:22:16.705172   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG | domain enable-default-cni-649359 has defined MAC address 52:54:00:13:01:c9 in network mk-enable-default-cni-649359
	I0211 03:22:16.705762   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG | unable to find current IP address of domain enable-default-cni-649359 in network mk-enable-default-cni-649359
	I0211 03:22:16.705782   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG | I0211 03:22:16.705738   73625 retry.go:31] will retry after 473.670944ms: waiting for domain to come up
	I0211 03:22:17.181336   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG | domain enable-default-cni-649359 has defined MAC address 52:54:00:13:01:c9 in network mk-enable-default-cni-649359
	I0211 03:22:17.181980   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG | unable to find current IP address of domain enable-default-cni-649359 in network mk-enable-default-cni-649359
	I0211 03:22:17.182014   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG | I0211 03:22:17.181958   73625 retry.go:31] will retry after 576.073375ms: waiting for domain to come up
	I0211 03:22:17.759677   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG | domain enable-default-cni-649359 has defined MAC address 52:54:00:13:01:c9 in network mk-enable-default-cni-649359
	I0211 03:22:17.760222   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG | unable to find current IP address of domain enable-default-cni-649359 in network mk-enable-default-cni-649359
	I0211 03:22:17.760248   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG | I0211 03:22:17.760181   73625 retry.go:31] will retry after 596.211322ms: waiting for domain to come up
	I0211 03:22:18.357952   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG | domain enable-default-cni-649359 has defined MAC address 52:54:00:13:01:c9 in network mk-enable-default-cni-649359
	I0211 03:22:18.358377   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG | unable to find current IP address of domain enable-default-cni-649359 in network mk-enable-default-cni-649359
	I0211 03:22:18.358447   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG | I0211 03:22:18.358353   73625 retry.go:31] will retry after 683.324494ms: waiting for domain to come up
	I0211 03:22:19.419943   71847 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 03:22:19.566231   71847 kubeadm.go:1113] duration metric: took 3.321687953s to wait for elevateKubeSystemPrivileges
	I0211 03:22:19.566269   71847 kubeadm.go:394] duration metric: took 14.850632389s to StartCluster
	I0211 03:22:19.566292   71847 settings.go:142] acquiring lock: {Name:mkf2645a714cc5873c434b18e1494d4128c48052 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 03:22:19.566355   71847 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20400-12456/kubeconfig
	I0211 03:22:19.567541   71847 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12456/kubeconfig: {Name:mkd961d61f748b29ba3bb0ad55f8216d88f98444 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 03:22:19.567782   71847 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0211 03:22:19.567780   71847 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.61.224 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0211 03:22:19.567807   71847 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0211 03:22:19.567885   71847 addons.go:69] Setting storage-provisioner=true in profile "custom-flannel-649359"
	I0211 03:22:19.567955   71847 addons.go:238] Setting addon storage-provisioner=true in "custom-flannel-649359"
	I0211 03:22:19.567988   71847 host.go:66] Checking if "custom-flannel-649359" exists ...
	I0211 03:22:19.568030   71847 config.go:182] Loaded profile config "custom-flannel-649359": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0211 03:22:19.567893   71847 addons.go:69] Setting default-storageclass=true in profile "custom-flannel-649359"
	I0211 03:22:19.568213   71847 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "custom-flannel-649359"
	I0211 03:22:19.568419   71847 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 03:22:19.568444   71847 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 03:22:19.568674   71847 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 03:22:19.568725   71847 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 03:22:19.569350   71847 out.go:177] * Verifying Kubernetes components...
	I0211 03:22:19.570677   71847 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0211 03:22:19.584858   71847 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39291
	I0211 03:22:19.585303   71847 main.go:141] libmachine: () Calling .GetVersion
	I0211 03:22:19.585834   71847 main.go:141] libmachine: Using API Version  1
	I0211 03:22:19.585859   71847 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 03:22:19.586230   71847 main.go:141] libmachine: () Calling .GetMachineName
	I0211 03:22:19.586841   71847 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 03:22:19.586889   71847 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 03:22:19.587300   71847 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44163
	I0211 03:22:19.587769   71847 main.go:141] libmachine: () Calling .GetVersion
	I0211 03:22:19.588290   71847 main.go:141] libmachine: Using API Version  1
	I0211 03:22:19.588308   71847 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 03:22:19.588773   71847 main.go:141] libmachine: () Calling .GetMachineName
	I0211 03:22:19.588965   71847 main.go:141] libmachine: (custom-flannel-649359) Calling .GetState
	I0211 03:22:19.592676   71847 addons.go:238] Setting addon default-storageclass=true in "custom-flannel-649359"
	I0211 03:22:19.592720   71847 host.go:66] Checking if "custom-flannel-649359" exists ...
	I0211 03:22:19.593092   71847 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 03:22:19.593126   71847 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 03:22:19.604437   71847 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35197
	I0211 03:22:19.604813   71847 main.go:141] libmachine: () Calling .GetVersion
	I0211 03:22:19.605385   71847 main.go:141] libmachine: Using API Version  1
	I0211 03:22:19.605402   71847 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 03:22:19.606113   71847 main.go:141] libmachine: () Calling .GetMachineName
	I0211 03:22:19.606346   71847 main.go:141] libmachine: (custom-flannel-649359) Calling .GetState
	I0211 03:22:19.608010   71847 main.go:141] libmachine: (custom-flannel-649359) Calling .DriverName
	I0211 03:22:19.610011   71847 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0211 03:22:19.611402   71847 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0211 03:22:19.611417   71847 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0211 03:22:19.611434   71847 main.go:141] libmachine: (custom-flannel-649359) Calling .GetSSHHostname
	I0211 03:22:19.612895   71847 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42057
	I0211 03:22:19.613369   71847 main.go:141] libmachine: () Calling .GetVersion
	I0211 03:22:19.613887   71847 main.go:141] libmachine: Using API Version  1
	I0211 03:22:19.613905   71847 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 03:22:19.614704   71847 main.go:141] libmachine: (custom-flannel-649359) DBG | domain custom-flannel-649359 has defined MAC address 52:54:00:10:3d:39 in network mk-custom-flannel-649359
	I0211 03:22:19.615108   71847 main.go:141] libmachine: (custom-flannel-649359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:3d:39", ip: ""} in network mk-custom-flannel-649359: {Iface:virbr3 ExpiryTime:2025-02-11 04:21:49 +0000 UTC Type:0 Mac:52:54:00:10:3d:39 Iaid: IPaddr:192.168.61.224 Prefix:24 Hostname:custom-flannel-649359 Clientid:01:52:54:00:10:3d:39}
	I0211 03:22:19.615129   71847 main.go:141] libmachine: (custom-flannel-649359) DBG | domain custom-flannel-649359 has defined IP address 192.168.61.224 and MAC address 52:54:00:10:3d:39 in network mk-custom-flannel-649359
	I0211 03:22:19.615405   71847 main.go:141] libmachine: (custom-flannel-649359) Calling .GetSSHPort
	I0211 03:22:19.615596   71847 main.go:141] libmachine: () Calling .GetMachineName
	I0211 03:22:19.615595   71847 main.go:141] libmachine: (custom-flannel-649359) Calling .GetSSHKeyPath
	I0211 03:22:19.615728   71847 main.go:141] libmachine: (custom-flannel-649359) Calling .GetSSHUsername
	I0211 03:22:19.616044   71847 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 03:22:19.616075   71847 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 03:22:19.616343   71847 sshutil.go:53] new ssh client: &{IP:192.168.61.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/custom-flannel-649359/id_rsa Username:docker}
	I0211 03:22:19.634742   71847 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34657
	I0211 03:22:19.635315   71847 main.go:141] libmachine: () Calling .GetVersion
	I0211 03:22:19.635757   71847 main.go:141] libmachine: Using API Version  1
	I0211 03:22:19.635782   71847 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 03:22:19.636012   71847 main.go:141] libmachine: () Calling .GetMachineName
	I0211 03:22:19.636132   71847 main.go:141] libmachine: (custom-flannel-649359) Calling .GetState
	I0211 03:22:19.637565   71847 main.go:141] libmachine: (custom-flannel-649359) Calling .DriverName
	I0211 03:22:19.637768   71847 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0211 03:22:19.637792   71847 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0211 03:22:19.637812   71847 main.go:141] libmachine: (custom-flannel-649359) Calling .GetSSHHostname
	I0211 03:22:19.640582   71847 main.go:141] libmachine: (custom-flannel-649359) DBG | domain custom-flannel-649359 has defined MAC address 52:54:00:10:3d:39 in network mk-custom-flannel-649359
	I0211 03:22:19.640954   71847 main.go:141] libmachine: (custom-flannel-649359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:10:3d:39", ip: ""} in network mk-custom-flannel-649359: {Iface:virbr3 ExpiryTime:2025-02-11 04:21:49 +0000 UTC Type:0 Mac:52:54:00:10:3d:39 Iaid: IPaddr:192.168.61.224 Prefix:24 Hostname:custom-flannel-649359 Clientid:01:52:54:00:10:3d:39}
	I0211 03:22:19.641035   71847 main.go:141] libmachine: (custom-flannel-649359) DBG | domain custom-flannel-649359 has defined IP address 192.168.61.224 and MAC address 52:54:00:10:3d:39 in network mk-custom-flannel-649359
	I0211 03:22:19.641098   71847 main.go:141] libmachine: (custom-flannel-649359) Calling .GetSSHPort
	I0211 03:22:19.641267   71847 main.go:141] libmachine: (custom-flannel-649359) Calling .GetSSHKeyPath
	I0211 03:22:19.641371   71847 main.go:141] libmachine: (custom-flannel-649359) Calling .GetSSHUsername
	I0211 03:22:19.641493   71847 sshutil.go:53] new ssh client: &{IP:192.168.61.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/custom-flannel-649359/id_rsa Username:docker}
	I0211 03:22:19.892304   71847 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0211 03:22:19.892576   71847 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0211 03:22:19.949139   71847 node_ready.go:35] waiting up to 15m0s for node "custom-flannel-649359" to be "Ready" ...
	I0211 03:22:20.048456   71847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0211 03:22:20.055049   71847 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0211 03:22:20.345237   71847 start.go:971] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
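	As a rough manual check (not part of the run), the injected host record could be confirmed from the rewritten CoreDNS ConfigMap; the kubectl context name is assumed to match the profile name:
	
	    kubectl --context custom-flannel-649359 -n kube-system get configmap coredns -o yaml | grep -A2 host.minikube.internal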
	I0211 03:22:20.681873   71847 main.go:141] libmachine: Making call to close driver server
	I0211 03:22:20.681897   71847 main.go:141] libmachine: (custom-flannel-649359) Calling .Close
	I0211 03:22:20.681946   71847 main.go:141] libmachine: Making call to close driver server
	I0211 03:22:20.681969   71847 main.go:141] libmachine: (custom-flannel-649359) Calling .Close
	I0211 03:22:20.682181   71847 main.go:141] libmachine: (custom-flannel-649359) DBG | Closing plugin on server side
	I0211 03:22:20.682238   71847 main.go:141] libmachine: Successfully made call to close driver server
	I0211 03:22:20.682247   71847 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 03:22:20.682254   71847 main.go:141] libmachine: (custom-flannel-649359) DBG | Closing plugin on server side
	I0211 03:22:20.682268   71847 main.go:141] libmachine: Making call to close driver server
	I0211 03:22:20.682276   71847 main.go:141] libmachine: (custom-flannel-649359) Calling .Close
	I0211 03:22:20.682277   71847 main.go:141] libmachine: Successfully made call to close driver server
	I0211 03:22:20.682291   71847 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 03:22:20.682300   71847 main.go:141] libmachine: Making call to close driver server
	I0211 03:22:20.682445   71847 main.go:141] libmachine: Successfully made call to close driver server
	I0211 03:22:20.682455   71847 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 03:22:20.682523   71847 main.go:141] libmachine: (custom-flannel-649359) Calling .Close
	I0211 03:22:20.682726   71847 main.go:141] libmachine: Successfully made call to close driver server
	I0211 03:22:20.682736   71847 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 03:22:20.694617   71847 main.go:141] libmachine: Making call to close driver server
	I0211 03:22:20.694636   71847 main.go:141] libmachine: (custom-flannel-649359) Calling .Close
	I0211 03:22:20.694906   71847 main.go:141] libmachine: Successfully made call to close driver server
	I0211 03:22:20.694929   71847 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 03:22:20.694935   71847 main.go:141] libmachine: (custom-flannel-649359) DBG | Closing plugin on server side
	I0211 03:22:20.696539   71847 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0211 03:22:19.100972   65022 pod_ready.go:103] pod "coredns-668d6bf9bc-qzb2w" in "kube-system" namespace has status "Ready":"False"
	I0211 03:22:19.605619   65022 pod_ready.go:93] pod "coredns-668d6bf9bc-qzb2w" in "kube-system" namespace has status "Ready":"True"
	I0211 03:22:19.605641   65022 pod_ready.go:82] duration metric: took 5.010655069s for pod "coredns-668d6bf9bc-qzb2w" in "kube-system" namespace to be "Ready" ...
	I0211 03:22:19.605655   65022 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-rbnbt" in "kube-system" namespace to be "Ready" ...
	I0211 03:22:19.615074   65022 pod_ready.go:93] pod "coredns-668d6bf9bc-rbnbt" in "kube-system" namespace has status "Ready":"True"
	I0211 03:22:19.615091   65022 pod_ready.go:82] duration metric: took 9.429853ms for pod "coredns-668d6bf9bc-rbnbt" in "kube-system" namespace to be "Ready" ...
	I0211 03:22:19.615100   65022 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-697681" in "kube-system" namespace to be "Ready" ...
	I0211 03:22:19.622722   65022 pod_ready.go:93] pod "etcd-default-k8s-diff-port-697681" in "kube-system" namespace has status "Ready":"True"
	I0211 03:22:19.622741   65022 pod_ready.go:82] duration metric: took 7.635185ms for pod "etcd-default-k8s-diff-port-697681" in "kube-system" namespace to be "Ready" ...
	I0211 03:22:19.622750   65022 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-697681" in "kube-system" namespace to be "Ready" ...
	I0211 03:22:19.628117   65022 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-697681" in "kube-system" namespace has status "Ready":"True"
	I0211 03:22:19.628136   65022 pod_ready.go:82] duration metric: took 5.378955ms for pod "kube-apiserver-default-k8s-diff-port-697681" in "kube-system" namespace to be "Ready" ...
	I0211 03:22:19.628148   65022 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-697681" in "kube-system" namespace to be "Ready" ...
	I0211 03:22:19.633232   65022 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-697681" in "kube-system" namespace has status "Ready":"True"
	I0211 03:22:19.633249   65022 pod_ready.go:82] duration metric: took 5.093166ms for pod "kube-controller-manager-default-k8s-diff-port-697681" in "kube-system" namespace to be "Ready" ...
	I0211 03:22:19.633256   65022 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7bbw8" in "kube-system" namespace to be "Ready" ...
	I0211 03:22:19.999300   65022 pod_ready.go:93] pod "kube-proxy-7bbw8" in "kube-system" namespace has status "Ready":"True"
	I0211 03:22:19.999335   65022 pod_ready.go:82] duration metric: took 366.071572ms for pod "kube-proxy-7bbw8" in "kube-system" namespace to be "Ready" ...
	I0211 03:22:19.999356   65022 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-697681" in "kube-system" namespace to be "Ready" ...
	I0211 03:22:20.400925   65022 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-697681" in "kube-system" namespace has status "Ready":"True"
	I0211 03:22:20.400953   65022 pod_ready.go:82] duration metric: took 401.587076ms for pod "kube-scheduler-default-k8s-diff-port-697681" in "kube-system" namespace to be "Ready" ...
	I0211 03:22:20.400962   65022 pod_ready.go:39] duration metric: took 5.809086132s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0211 03:22:20.400981   65022 api_server.go:52] waiting for apiserver process to appear ...
	I0211 03:22:20.401038   65022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:22:20.420500   65022 api_server.go:72] duration metric: took 6.172863752s to wait for apiserver process to appear ...
	I0211 03:22:20.420527   65022 api_server.go:88] waiting for apiserver healthz status ...
	I0211 03:22:20.420551   65022 api_server.go:253] Checking apiserver healthz at https://192.168.72.113:8444/healthz ...
	I0211 03:22:20.426573   65022 api_server.go:279] https://192.168.72.113:8444/healthz returned 200:
	ok
	I0211 03:22:20.427928   65022 api_server.go:141] control plane version: v1.32.1
	I0211 03:22:20.427953   65022 api_server.go:131] duration metric: took 7.418234ms to wait for apiserver health ...
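	The healthz probe above can be reproduced by hand; a minimal sketch using the address and port from the log, assuming the default anonymous access to /healthz is in place:
	
	    # -k skips verification of the cluster's self-signed serving certificate
	    curl -k https://192.168.72.113:8444/healthz    # expected response: ok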
	I0211 03:22:20.427963   65022 system_pods.go:43] waiting for kube-system pods to appear ...
	I0211 03:22:20.600692   65022 system_pods.go:59] 9 kube-system pods found
	I0211 03:22:20.600721   65022 system_pods.go:61] "coredns-668d6bf9bc-qzb2w" [6b6ecf1b-77f1-43e9-a77d-d5cad1cf357d] Running
	I0211 03:22:20.600726   65022 system_pods.go:61] "coredns-668d6bf9bc-rbnbt" [ffb6d5ba-f96d-4229-835c-578695acbd83] Running
	I0211 03:22:20.600731   65022 system_pods.go:61] "etcd-default-k8s-diff-port-697681" [62ef7781-b907-4ef8-bce4-f1e4e0319dc4] Running
	I0211 03:22:20.600735   65022 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-697681" [624510f3-6002-45dd-b0f7-c9b34b9cffa4] Running
	I0211 03:22:20.600739   65022 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-697681" [0271e6d0-e782-4506-b443-6fc7cab0dbea] Running
	I0211 03:22:20.600742   65022 system_pods.go:61] "kube-proxy-7bbw8" [857a975a-d2b9-4d90-817b-03c55a2ac976] Running
	I0211 03:22:20.600745   65022 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-697681" [83090f69-b748-49e0-abfd-e83a7d4bca7f] Running
	I0211 03:22:20.600754   65022 system_pods.go:61] "metrics-server-f79f97bbb-x2b2c" [d34f1cf7-6a03-4e40-a506-3f0dc6e4e332] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0211 03:22:20.600765   65022 system_pods.go:61] "storage-provisioner" [c7425194-b650-48be-a453-cdd7993766f7] Running
	I0211 03:22:20.600778   65022 system_pods.go:74] duration metric: took 172.808707ms to wait for pod list to return data ...
	I0211 03:22:20.600790   65022 default_sa.go:34] waiting for default service account to be created ...
	I0211 03:22:20.799943   65022 default_sa.go:45] found service account: "default"
	I0211 03:22:20.799973   65022 default_sa.go:55] duration metric: took 199.173349ms for default service account to be created ...
	I0211 03:22:20.799989   65022 system_pods.go:116] waiting for k8s-apps to be running ...
	I0211 03:22:20.999552   65022 system_pods.go:86] 9 kube-system pods found
	I0211 03:22:20.999595   65022 system_pods.go:89] "coredns-668d6bf9bc-qzb2w" [6b6ecf1b-77f1-43e9-a77d-d5cad1cf357d] Running
	I0211 03:22:20.999606   65022 system_pods.go:89] "coredns-668d6bf9bc-rbnbt" [ffb6d5ba-f96d-4229-835c-578695acbd83] Running
	I0211 03:22:20.999613   65022 system_pods.go:89] "etcd-default-k8s-diff-port-697681" [62ef7781-b907-4ef8-bce4-f1e4e0319dc4] Running
	I0211 03:22:20.999621   65022 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-697681" [624510f3-6002-45dd-b0f7-c9b34b9cffa4] Running
	I0211 03:22:20.999629   65022 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-697681" [0271e6d0-e782-4506-b443-6fc7cab0dbea] Running
	I0211 03:22:20.999635   65022 system_pods.go:89] "kube-proxy-7bbw8" [857a975a-d2b9-4d90-817b-03c55a2ac976] Running
	I0211 03:22:20.999641   65022 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-697681" [83090f69-b748-49e0-abfd-e83a7d4bca7f] Running
	I0211 03:22:20.999652   65022 system_pods.go:89] "metrics-server-f79f97bbb-x2b2c" [d34f1cf7-6a03-4e40-a506-3f0dc6e4e332] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0211 03:22:20.999663   65022 system_pods.go:89] "storage-provisioner" [c7425194-b650-48be-a453-cdd7993766f7] Running
	I0211 03:22:20.999675   65022 system_pods.go:126] duration metric: took 199.677394ms to wait for k8s-apps to be running ...
	I0211 03:22:20.999685   65022 system_svc.go:44] waiting for kubelet service to be running ....
	I0211 03:22:20.999732   65022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0211 03:22:21.013447   65022 system_svc.go:56] duration metric: took 13.754586ms WaitForService to wait for kubelet
	I0211 03:22:21.013471   65022 kubeadm.go:582] duration metric: took 6.765841728s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0211 03:22:21.013489   65022 node_conditions.go:102] verifying NodePressure condition ...
	I0211 03:22:21.200030   65022 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0211 03:22:21.200068   65022 node_conditions.go:123] node cpu capacity is 2
	I0211 03:22:21.200083   65022 node_conditions.go:105] duration metric: took 186.587419ms to run NodePressure ...
	I0211 03:22:21.200098   65022 start.go:241] waiting for startup goroutines ...
	I0211 03:22:21.200108   65022 start.go:246] waiting for cluster config update ...
	I0211 03:22:21.200123   65022 start.go:255] writing updated cluster config ...
	I0211 03:22:21.200472   65022 ssh_runner.go:195] Run: rm -f paused
	I0211 03:22:21.260011   65022 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0211 03:22:21.261999   65022 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-697681" cluster and "default" namespace by default
	I0211 03:22:20.697692   71847 addons.go:514] duration metric: took 1.129887068s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0211 03:22:20.849016   71847 kapi.go:214] "coredns" deployment in "kube-system" namespace and "custom-flannel-649359" context rescaled to 1 replicas
	I0211 03:22:21.953290   71847 node_ready.go:53] node "custom-flannel-649359" has status "Ready":"False"
	I0211 03:22:19.043061   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG | domain enable-default-cni-649359 has defined MAC address 52:54:00:13:01:c9 in network mk-enable-default-cni-649359
	I0211 03:22:19.043587   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG | unable to find current IP address of domain enable-default-cni-649359 in network mk-enable-default-cni-649359
	I0211 03:22:19.043643   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG | I0211 03:22:19.043549   73625 retry.go:31] will retry after 775.380696ms: waiting for domain to come up
	I0211 03:22:19.820481   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG | domain enable-default-cni-649359 has defined MAC address 52:54:00:13:01:c9 in network mk-enable-default-cni-649359
	I0211 03:22:19.821035   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG | unable to find current IP address of domain enable-default-cni-649359 in network mk-enable-default-cni-649359
	I0211 03:22:19.821081   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG | I0211 03:22:19.821000   73625 retry.go:31] will retry after 1.11240085s: waiting for domain to come up
	I0211 03:22:20.935236   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG | domain enable-default-cni-649359 has defined MAC address 52:54:00:13:01:c9 in network mk-enable-default-cni-649359
	I0211 03:22:20.935686   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG | unable to find current IP address of domain enable-default-cni-649359 in network mk-enable-default-cni-649359
	I0211 03:22:20.935710   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG | I0211 03:22:20.935648   73625 retry.go:31] will retry after 1.25394351s: waiting for domain to come up
	I0211 03:22:22.190631   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG | domain enable-default-cni-649359 has defined MAC address 52:54:00:13:01:c9 in network mk-enable-default-cni-649359
	I0211 03:22:22.191192   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG | unable to find current IP address of domain enable-default-cni-649359 in network mk-enable-default-cni-649359
	I0211 03:22:22.191227   73602 main.go:141] libmachine: (enable-default-cni-649359) DBG | I0211 03:22:22.191156   73625 retry.go:31] will retry after 2.079770263s: waiting for domain to come up
	I0211 03:22:26.180696   63944 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0211 03:22:26.180969   63944 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0211 03:22:26.180985   63944 kubeadm.go:310] 
	I0211 03:22:26.181048   63944 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0211 03:22:26.181461   63944 kubeadm.go:310] 		timed out waiting for the condition
	I0211 03:22:26.181482   63944 kubeadm.go:310] 
	I0211 03:22:26.181527   63944 kubeadm.go:310] 	This error is likely caused by:
	I0211 03:22:26.181575   63944 kubeadm.go:310] 		- The kubelet is not running
	I0211 03:22:26.181710   63944 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0211 03:22:26.181722   63944 kubeadm.go:310] 
	I0211 03:22:26.181859   63944 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0211 03:22:26.181905   63944 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0211 03:22:26.181946   63944 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0211 03:22:26.181953   63944 kubeadm.go:310] 
	I0211 03:22:26.182044   63944 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0211 03:22:26.182117   63944 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0211 03:22:26.182125   63944 kubeadm.go:310] 
	I0211 03:22:26.182243   63944 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0211 03:22:26.182320   63944 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0211 03:22:26.182404   63944 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0211 03:22:26.182485   63944 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0211 03:22:26.182493   63944 kubeadm.go:310] 
	I0211 03:22:26.184968   63944 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0211 03:22:26.185102   63944 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0211 03:22:26.185192   63944 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
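	A minimal sketch that bundles the diagnostics kubeadm recommends above (commands copied from the log; the profile name and the use of 'minikube ssh' are assumptions):
	
	    minikube ssh -p old-k8s-version-244815
	    sudo systemctl status kubelet
	    sudo journalctl -xeu kubelet | tail -n 100
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID    # CONTAINERID taken from the ps output above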
	I0211 03:22:26.185257   63944 kubeadm.go:394] duration metric: took 7m56.786325849s to StartCluster
	I0211 03:22:26.185298   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0211 03:22:26.185359   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0211 03:22:26.243779   63944 cri.go:89] found id: ""
	I0211 03:22:26.243817   63944 logs.go:282] 0 containers: []
	W0211 03:22:26.243829   63944 logs.go:284] No container was found matching "kube-apiserver"
	I0211 03:22:26.243837   63944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0211 03:22:26.243898   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0211 03:22:26.293825   63944 cri.go:89] found id: ""
	I0211 03:22:26.293856   63944 logs.go:282] 0 containers: []
	W0211 03:22:26.293867   63944 logs.go:284] No container was found matching "etcd"
	I0211 03:22:26.293883   63944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0211 03:22:26.293946   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0211 03:22:26.347783   63944 cri.go:89] found id: ""
	I0211 03:22:26.347817   63944 logs.go:282] 0 containers: []
	W0211 03:22:26.347827   63944 logs.go:284] No container was found matching "coredns"
	I0211 03:22:26.347835   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0211 03:22:26.347902   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0211 03:22:26.408435   63944 cri.go:89] found id: ""
	I0211 03:22:26.408464   63944 logs.go:282] 0 containers: []
	W0211 03:22:26.408474   63944 logs.go:284] No container was found matching "kube-scheduler"
	I0211 03:22:26.408482   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0211 03:22:26.408538   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0211 03:22:26.460484   63944 cri.go:89] found id: ""
	I0211 03:22:26.460510   63944 logs.go:282] 0 containers: []
	W0211 03:22:26.460519   63944 logs.go:284] No container was found matching "kube-proxy"
	I0211 03:22:26.460526   63944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0211 03:22:26.460585   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0211 03:22:26.518551   63944 cri.go:89] found id: ""
	I0211 03:22:26.518576   63944 logs.go:282] 0 containers: []
	W0211 03:22:26.518586   63944 logs.go:284] No container was found matching "kube-controller-manager"
	I0211 03:22:26.518594   63944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0211 03:22:26.518652   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0211 03:22:26.581984   63944 cri.go:89] found id: ""
	I0211 03:22:26.582024   63944 logs.go:282] 0 containers: []
	W0211 03:22:26.582035   63944 logs.go:284] No container was found matching "kindnet"
	I0211 03:22:26.582043   63944 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0211 03:22:26.582105   63944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0211 03:22:26.641024   63944 cri.go:89] found id: ""
	I0211 03:22:26.641051   63944 logs.go:282] 0 containers: []
	W0211 03:22:26.641061   63944 logs.go:284] No container was found matching "kubernetes-dashboard"
	I0211 03:22:26.641073   63944 logs.go:123] Gathering logs for dmesg ...
	I0211 03:22:26.641091   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0211 03:22:26.667223   63944 logs.go:123] Gathering logs for describe nodes ...
	I0211 03:22:26.667258   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0211 03:22:26.781585   63944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0211 03:22:26.781614   63944 logs.go:123] Gathering logs for CRI-O ...
	I0211 03:22:26.781630   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0211 03:22:26.951278   63944 logs.go:123] Gathering logs for container status ...
	I0211 03:22:26.951376   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0211 03:22:27.005666   63944 logs.go:123] Gathering logs for kubelet ...
	I0211 03:22:27.005692   63944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0211 03:22:27.067504   63944 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0211 03:22:27.067561   63944 out.go:270] * 
	W0211 03:22:27.067626   63944 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0211 03:22:27.067652   63944 out.go:270] * 
	W0211 03:22:27.068876   63944 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0211 03:22:27.071907   63944 out.go:201] 
	W0211 03:22:27.073466   63944 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0211 03:22:27.073520   63944 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0211 03:22:27.073549   63944 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0211 03:22:27.074865   63944 out.go:201] 
	
	
	==> CRI-O <==
	Feb 11 03:22:28 old-k8s-version-244815 crio[625]: time="2025-02-11 03:22:28.354177192Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739244148354144627,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3a59574a-1371-4ac3-bd97-e5cfba69f15f name=/runtime.v1.ImageService/ImageFsInfo
	Feb 11 03:22:28 old-k8s-version-244815 crio[625]: time="2025-02-11 03:22:28.355351465Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=603e525c-eac5-49b9-b42c-e919c2dadaaa name=/runtime.v1.RuntimeService/ListContainers
	Feb 11 03:22:28 old-k8s-version-244815 crio[625]: time="2025-02-11 03:22:28.355457248Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=603e525c-eac5-49b9-b42c-e919c2dadaaa name=/runtime.v1.RuntimeService/ListContainers
	Feb 11 03:22:28 old-k8s-version-244815 crio[625]: time="2025-02-11 03:22:28.355505215Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=603e525c-eac5-49b9-b42c-e919c2dadaaa name=/runtime.v1.RuntimeService/ListContainers
	Feb 11 03:22:28 old-k8s-version-244815 crio[625]: time="2025-02-11 03:22:28.397259677Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3df6cc3d-882f-4c6a-b73d-63e52ea86d6a name=/runtime.v1.RuntimeService/Version
	Feb 11 03:22:28 old-k8s-version-244815 crio[625]: time="2025-02-11 03:22:28.397352034Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3df6cc3d-882f-4c6a-b73d-63e52ea86d6a name=/runtime.v1.RuntimeService/Version
	Feb 11 03:22:28 old-k8s-version-244815 crio[625]: time="2025-02-11 03:22:28.398363778Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=65356f42-1084-4fbc-9863-c8738c553258 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 11 03:22:28 old-k8s-version-244815 crio[625]: time="2025-02-11 03:22:28.399008309Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739244148398965172,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=65356f42-1084-4fbc-9863-c8738c553258 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 11 03:22:28 old-k8s-version-244815 crio[625]: time="2025-02-11 03:22:28.399521718Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a01fa126-9cac-4106-8fb8-5384bec1cb1a name=/runtime.v1.RuntimeService/ListContainers
	Feb 11 03:22:28 old-k8s-version-244815 crio[625]: time="2025-02-11 03:22:28.399588905Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a01fa126-9cac-4106-8fb8-5384bec1cb1a name=/runtime.v1.RuntimeService/ListContainers
	Feb 11 03:22:28 old-k8s-version-244815 crio[625]: time="2025-02-11 03:22:28.399636983Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a01fa126-9cac-4106-8fb8-5384bec1cb1a name=/runtime.v1.RuntimeService/ListContainers
	Feb 11 03:22:28 old-k8s-version-244815 crio[625]: time="2025-02-11 03:22:28.448421202Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c9caad1a-c2a0-4c89-87b1-0241d8b0dae7 name=/runtime.v1.RuntimeService/Version
	Feb 11 03:22:28 old-k8s-version-244815 crio[625]: time="2025-02-11 03:22:28.448563025Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c9caad1a-c2a0-4c89-87b1-0241d8b0dae7 name=/runtime.v1.RuntimeService/Version
	Feb 11 03:22:28 old-k8s-version-244815 crio[625]: time="2025-02-11 03:22:28.449978164Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dc23433e-8ccb-48c7-95bd-9860d2bac668 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 11 03:22:28 old-k8s-version-244815 crio[625]: time="2025-02-11 03:22:28.450609844Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739244148450546341,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dc23433e-8ccb-48c7-95bd-9860d2bac668 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 11 03:22:28 old-k8s-version-244815 crio[625]: time="2025-02-11 03:22:28.459767958Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d7d2bc6c-705d-4ca0-91ce-48d3e0d0deab name=/runtime.v1.RuntimeService/ListContainers
	Feb 11 03:22:28 old-k8s-version-244815 crio[625]: time="2025-02-11 03:22:28.459850276Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d7d2bc6c-705d-4ca0-91ce-48d3e0d0deab name=/runtime.v1.RuntimeService/ListContainers
	Feb 11 03:22:28 old-k8s-version-244815 crio[625]: time="2025-02-11 03:22:28.459902049Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d7d2bc6c-705d-4ca0-91ce-48d3e0d0deab name=/runtime.v1.RuntimeService/ListContainers
	Feb 11 03:22:28 old-k8s-version-244815 crio[625]: time="2025-02-11 03:22:28.504926922Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bff339b6-2f72-45ad-8f88-8ccffb668278 name=/runtime.v1.RuntimeService/Version
	Feb 11 03:22:28 old-k8s-version-244815 crio[625]: time="2025-02-11 03:22:28.504996890Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bff339b6-2f72-45ad-8f88-8ccffb668278 name=/runtime.v1.RuntimeService/Version
	Feb 11 03:22:28 old-k8s-version-244815 crio[625]: time="2025-02-11 03:22:28.506508403Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c82e7465-6816-40c1-b3be-748973f6df98 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 11 03:22:28 old-k8s-version-244815 crio[625]: time="2025-02-11 03:22:28.506960234Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739244148506934389,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c82e7465-6816-40c1-b3be-748973f6df98 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 11 03:22:28 old-k8s-version-244815 crio[625]: time="2025-02-11 03:22:28.507470471Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f31edaf7-2bde-4e19-868f-fd7c1d112ef5 name=/runtime.v1.RuntimeService/ListContainers
	Feb 11 03:22:28 old-k8s-version-244815 crio[625]: time="2025-02-11 03:22:28.507516025Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f31edaf7-2bde-4e19-868f-fd7c1d112ef5 name=/runtime.v1.RuntimeService/ListContainers
	Feb 11 03:22:28 old-k8s-version-244815 crio[625]: time="2025-02-11 03:22:28.507547750Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f31edaf7-2bde-4e19-868f-fd7c1d112ef5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Feb11 03:14] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053978] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039203] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.074931] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.949355] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.579835] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.978569] systemd-fstab-generator[552]: Ignoring "noauto" option for root device
	[  +0.065488] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058561] systemd-fstab-generator[564]: Ignoring "noauto" option for root device
	[  +0.199108] systemd-fstab-generator[578]: Ignoring "noauto" option for root device
	[  +0.179272] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.277363] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +6.347509] systemd-fstab-generator[869]: Ignoring "noauto" option for root device
	[  +0.058497] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.802316] systemd-fstab-generator[993]: Ignoring "noauto" option for root device
	[ +12.357601] kauditd_printk_skb: 46 callbacks suppressed
	[Feb11 03:18] systemd-fstab-generator[5029]: Ignoring "noauto" option for root device
	[Feb11 03:20] systemd-fstab-generator[5308]: Ignoring "noauto" option for root device
	[  +0.093052] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 03:22:28 up 8 min,  0 users,  load average: 0.07, 0.19, 0.12
	Linux old-k8s-version-244815 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Feb 11 03:22:27 old-k8s-version-244815 kubelet[5486]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0001020c0, 0xc0008b7cb0)
	Feb 11 03:22:27 old-k8s-version-244815 kubelet[5486]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Feb 11 03:22:27 old-k8s-version-244815 kubelet[5486]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Feb 11 03:22:27 old-k8s-version-244815 kubelet[5486]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Feb 11 03:22:27 old-k8s-version-244815 kubelet[5486]: goroutine 169 [select]:
	Feb 11 03:22:27 old-k8s-version-244815 kubelet[5486]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00092bef0, 0x4f0ac20, 0xc000c63c70, 0x1, 0xc0001020c0)
	Feb 11 03:22:27 old-k8s-version-244815 kubelet[5486]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Feb 11 03:22:27 old-k8s-version-244815 kubelet[5486]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc00019c2a0, 0xc0001020c0)
	Feb 11 03:22:27 old-k8s-version-244815 kubelet[5486]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Feb 11 03:22:27 old-k8s-version-244815 kubelet[5486]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Feb 11 03:22:27 old-k8s-version-244815 kubelet[5486]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Feb 11 03:22:27 old-k8s-version-244815 kubelet[5486]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc0003d6980, 0xc0002ebc20)
	Feb 11 03:22:27 old-k8s-version-244815 kubelet[5486]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Feb 11 03:22:27 old-k8s-version-244815 kubelet[5486]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Feb 11 03:22:27 old-k8s-version-244815 kubelet[5486]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Feb 11 03:22:27 old-k8s-version-244815 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 11 03:22:27 old-k8s-version-244815 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 11 03:22:28 old-k8s-version-244815 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Feb 11 03:22:28 old-k8s-version-244815 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 11 03:22:28 old-k8s-version-244815 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 11 03:22:28 old-k8s-version-244815 kubelet[5584]: I0211 03:22:28.488954    5584 server.go:416] Version: v1.20.0
	Feb 11 03:22:28 old-k8s-version-244815 kubelet[5584]: I0211 03:22:28.489220    5584 server.go:837] Client rotation is on, will bootstrap in background
	Feb 11 03:22:28 old-k8s-version-244815 kubelet[5584]: I0211 03:22:28.491081    5584 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 11 03:22:28 old-k8s-version-244815 kubelet[5584]: W0211 03:22:28.492086    5584 manager.go:159] Cannot detect current cgroup on cgroup v2
	Feb 11 03:22:28 old-k8s-version-244815 kubelet[5584]: I0211 03:22:28.492208    5584 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-244815 -n old-k8s-version-244815
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-244815 -n old-k8s-version-244815: exit status 2 (260.073575ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-244815" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (508.13s)
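The SecondStart failure above traces back to the kubelet crash-looping on the old-k8s-version node (systemd records status=255 and a restart counter of 20), so kubeadm's wait-control-plane phase times out and the apiserver on port 8443 never comes up. Below is a minimal triage sketch, assuming the kvm2 driver and cri-o runtime this job uses; it only reuses commands and flags that appear in the log itself, and the retry invocation at the end is an illustration rather than the test's actual command line.

	# check the kubelet on the node (the same commands the kubeadm output suggests)
	out/minikube-linux-amd64 -p old-k8s-version-244815 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-244815 ssh "sudo journalctl -xeu kubelet"
	# look for crashed control-plane containers in CRI-O
	out/minikube-linux-amd64 -p old-k8s-version-244815 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# retry the start with the cgroup driver the suggestion in the log points at
	out/minikube-linux-amd64 start -p old-k8s-version-244815 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd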

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.44s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
[... the identical warning above is repeated 16 more times while the test polls the unreachable apiserver ...]
I0211 03:22:45.928086   19645 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
[... the identical warning above is repeated 76 more times ...]
E0211 03:24:02.579006   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/auto-649359/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:24:02.585417   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/auto-649359/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:24:02.596846   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/auto-649359/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:24:02.618254   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/auto-649359/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:24:02.659673   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/auto-649359/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:24:02.741217   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/auto-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:24:05.148799   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/auto-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:24:07.710514   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/auto-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
[... the identical warning above is repeated 8 more times ...]
E0211 03:24:16.210583   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
[... previous warning repeated 3 more times ...]
E0211 03:24:20.065562   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/no-preload-214316/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
[... previous warning repeated 2 more times ...]
E0211 03:24:23.073494   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/auto-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
[... previous warning repeated 6 more times ...]
E0211 03:24:30.819556   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/default-k8s-diff-port-697681/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:24:30.826012   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/default-k8s-diff-port-697681/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:24:30.837501   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/default-k8s-diff-port-697681/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:24:30.858817   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/default-k8s-diff-port-697681/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:24:30.900183   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/default-k8s-diff-port-697681/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:24:30.981613   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/default-k8s-diff-port-697681/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:24:31.143807   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/default-k8s-diff-port-697681/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:24:31.465467   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/default-k8s-diff-port-697681/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:24:32.106983   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/default-k8s-diff-port-697681/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:24:33.388383   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/default-k8s-diff-port-697681/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:24:35.950527   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/default-k8s-diff-port-697681/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
[... previous warning repeated 5 more times ...]
E0211 03:24:41.072169   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/default-k8s-diff-port-697681/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:24:43.555306   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/auto-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
[... previous warning repeated 3 more times ...]
E0211 03:24:47.766799   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/no-preload-214316/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
[... previous warning repeated 3 more times ...]
E0211 03:24:51.313651   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/default-k8s-diff-port-697681/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
[... previous warning repeated 19 more times ...]
E0211 03:25:11.795150   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/default-k8s-diff-port-697681/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
[... previous warning repeated 12 more times ...]
E0211 03:25:24.517561   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/auto-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
[... previous warning repeated 27 more times ...]
E0211 03:25:52.757139   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/default-k8s-diff-port-697681/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
[... previous warning repeated 5 more times ...]
E0211 03:25:58.872067   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/kindnet-649359/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:25:58.878463   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/kindnet-649359/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:25:58.889820   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/kindnet-649359/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:25:58.911173   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/kindnet-649359/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:25:58.952506   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/kindnet-649359/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:25:59.034036   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/kindnet-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:25:59.196090   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/kindnet-649359/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:25:59.517792   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/kindnet-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:26:00.159901   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/kindnet-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:26:01.441500   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/kindnet-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:26:04.003745   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/kindnet-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
[... previous warning repeated 5 more times ...]
E0211 03:26:09.125266   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/kindnet-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
[... previous warning repeated 9 more times ...]
E0211 03:26:19.367601   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/kindnet-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
[... previous warning repeated 19 more times ...]
E0211 03:26:39.594975   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/calico-649359/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:26:39.601349   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/calico-649359/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:26:39.612668   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/calico-649359/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:26:39.633972   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/calico-649359/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:26:39.675335   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/calico-649359/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:26:39.756727   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/calico-649359/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:26:39.849153   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/kindnet-649359/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:26:39.918533   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/calico-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:26:40.239874   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/calico-649359/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:26:40.881517   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/calico-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:26:42.163646   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/calico-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:26:44.725125   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/calico-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:26:46.438993   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/auto-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:26:49.847301   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/calico-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:27:00.088779   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/calico-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:27:14.678796   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/default-k8s-diff-port-697681/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:27:20.570799   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/calico-649359/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:27:20.811463   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/kindnet-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:27:23.755331   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/functional-454298/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:27:45.863458   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/custom-flannel-649359/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:27:45.869872   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/custom-flannel-649359/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:27:45.881194   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/custom-flannel-649359/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:27:45.902498   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/custom-flannel-649359/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:27:45.943961   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/custom-flannel-649359/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:27:46.025379   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/custom-flannel-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:27:46.187563   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/custom-flannel-649359/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:27:46.509391   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/custom-flannel-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:27:47.151277   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/custom-flannel-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:27:48.432664   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/custom-flannel-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:27:50.994727   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/custom-flannel-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:27:56.116758   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/custom-flannel-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:28:01.532360   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/calico-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:28:06.358990   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/custom-flannel-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:28:15.922298   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/enable-default-cni-649359/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:28:15.928636   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/enable-default-cni-649359/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:28:15.939972   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/enable-default-cni-649359/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:28:15.961329   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/enable-default-cni-649359/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:28:16.002707   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/enable-default-cni-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:28:16.084786   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/enable-default-cni-649359/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:28:16.246514   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/enable-default-cni-649359/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:28:16.568284   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/enable-default-cni-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:28:17.210267   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/enable-default-cni-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:28:18.491699   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/enable-default-cni-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:28:21.053392   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/enable-default-cni-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:28:26.175610   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/enable-default-cni-649359/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:28:26.840797   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/custom-flannel-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:28:36.417810   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/enable-default-cni-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:28:42.733522   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/kindnet-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
    (previous warning repeated 13 more times)
E0211 03:28:56.758194   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:28:56.764541   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:28:56.775892   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:28:56.797270   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:28:56.838623   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:28:56.900064   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/enable-default-cni-649359/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:28:56.920390   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:28:57.082473   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:28:57.404112   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:28:58.045663   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:28:59.327353   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:29:01.888728   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:29:02.578217   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/auto-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
    (previous warning repeated 3 more times)
E0211 03:29:07.010759   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:29:07.802613   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/custom-flannel-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:29:08.256107   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:29:08.262486   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:29:08.273787   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:29:08.295115   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:29:08.336489   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:29:08.417946   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:29:08.579511   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:29:08.901335   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:29:09.542968   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:29:10.824628   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
    (previous warning repeated 2 more times)
E0211 03:29:13.386753   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
    (previous warning repeated 2 more times)
E0211 03:29:16.210559   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:29:17.252782   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:29:18.509078   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:29:20.065630   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/no-preload-214316/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
    (previous warning repeated 2 more times)
E0211 03:29:23.453732   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/calico-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
    (previous warning repeated 4 more times)
E0211 03:29:28.751306   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:29:30.280981   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/auto-649359/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:29:30.820019   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/default-k8s-diff-port-697681/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
    (previous warning repeated 6 more times)
E0211 03:29:37.734194   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:29:37.861692   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/enable-default-cni-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
    (previous warning repeated 11 more times)
E0211 03:29:49.232601   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
    (previous warning repeated 8 more times)
E0211 03:29:58.520094   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/default-k8s-diff-port-697681/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
    (previous warning repeated 19 more times)
E0211 03:30:18.695599   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
    (previous warning repeated 10 more times)
E0211 03:30:29.724228   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/custom-flannel-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:30:30.194930   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
    (previous warning repeated 27 more times)
E0211 03:30:58.871669   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/kindnet-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:30:59.783719   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/enable-default-cni-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:31:26.575678   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/kindnet-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-244815 -n old-k8s-version-244815
start_stop_delete_test.go:272: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-244815 -n old-k8s-version-244815: exit status 2 (238.503442ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:272: status error: exit status 2 (may be ok)
start_stop_delete_test.go:272: "old-k8s-version-244815" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
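For manual triage of a failure like this, a rough check along the following lines can show whether the control plane and the dashboard pod ever recovered after the stop/start cycle. This is only a sketch: it assumes minikube's default kubectl context naming (a context named after the profile, old-k8s-version-244815) and reuses the namespace and label selector from the warnings above; if the apiserver is still down, the kubectl calls will fail with the same "connection refused" seen in the polling output.

	# confirm what minikube reports for the host and apiserver state of this profile
	out/minikube-linux-amd64 -p old-k8s-version-244815 status
	# list the dashboard pods the test helper was polling for
	kubectl --context old-k8s-version-244815 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide
	# inspect the control-plane pods, since the status output above reports the apiserver as Stopped
	kubectl --context old-k8s-version-244815 -n kube-system get pods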
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-244815 -n old-k8s-version-244815
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-244815 -n old-k8s-version-244815: exit status 2 (208.594508ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-244815 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile    |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-649359 sudo iptables                       | bridge-649359 | jenkins | v1.35.0 | 11 Feb 25 03:24 UTC | 11 Feb 25 03:24 UTC |
	|         | -t nat -L -n -v                                      |               |         |         |                     |                     |
	| ssh     | -p bridge-649359 sudo                                | bridge-649359 | jenkins | v1.35.0 | 11 Feb 25 03:24 UTC | 11 Feb 25 03:24 UTC |
	|         | systemctl status kubelet --all                       |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-649359 sudo                                | bridge-649359 | jenkins | v1.35.0 | 11 Feb 25 03:24 UTC | 11 Feb 25 03:24 UTC |
	|         | systemctl cat kubelet                                |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-649359 sudo                                | bridge-649359 | jenkins | v1.35.0 | 11 Feb 25 03:24 UTC | 11 Feb 25 03:24 UTC |
	|         | journalctl -xeu kubelet --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-649359 sudo cat                            | bridge-649359 | jenkins | v1.35.0 | 11 Feb 25 03:24 UTC | 11 Feb 25 03:24 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |               |         |         |                     |                     |
	| ssh     | -p bridge-649359 sudo cat                            | bridge-649359 | jenkins | v1.35.0 | 11 Feb 25 03:24 UTC | 11 Feb 25 03:24 UTC |
	|         | /var/lib/kubelet/config.yaml                         |               |         |         |                     |                     |
	| ssh     | -p bridge-649359 sudo                                | bridge-649359 | jenkins | v1.35.0 | 11 Feb 25 03:24 UTC |                     |
	|         | systemctl status docker --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-649359 sudo                                | bridge-649359 | jenkins | v1.35.0 | 11 Feb 25 03:24 UTC | 11 Feb 25 03:24 UTC |
	|         | systemctl cat docker                                 |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-649359 sudo cat                            | bridge-649359 | jenkins | v1.35.0 | 11 Feb 25 03:24 UTC | 11 Feb 25 03:24 UTC |
	|         | /etc/docker/daemon.json                              |               |         |         |                     |                     |
	| ssh     | -p bridge-649359 sudo docker                         | bridge-649359 | jenkins | v1.35.0 | 11 Feb 25 03:24 UTC |                     |
	|         | system info                                          |               |         |         |                     |                     |
	| ssh     | -p bridge-649359 sudo                                | bridge-649359 | jenkins | v1.35.0 | 11 Feb 25 03:24 UTC |                     |
	|         | systemctl status cri-docker                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p bridge-649359 sudo                                | bridge-649359 | jenkins | v1.35.0 | 11 Feb 25 03:24 UTC | 11 Feb 25 03:24 UTC |
	|         | systemctl cat cri-docker                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-649359 sudo cat                            | bridge-649359 | jenkins | v1.35.0 | 11 Feb 25 03:24 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |               |         |         |                     |                     |
	| ssh     | -p bridge-649359 sudo cat                            | bridge-649359 | jenkins | v1.35.0 | 11 Feb 25 03:24 UTC | 11 Feb 25 03:24 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |               |         |         |                     |                     |
	| ssh     | -p bridge-649359 sudo                                | bridge-649359 | jenkins | v1.35.0 | 11 Feb 25 03:24 UTC | 11 Feb 25 03:24 UTC |
	|         | cri-dockerd --version                                |               |         |         |                     |                     |
	| ssh     | -p bridge-649359 sudo                                | bridge-649359 | jenkins | v1.35.0 | 11 Feb 25 03:24 UTC |                     |
	|         | systemctl status containerd                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p bridge-649359 sudo                                | bridge-649359 | jenkins | v1.35.0 | 11 Feb 25 03:24 UTC | 11 Feb 25 03:24 UTC |
	|         | systemctl cat containerd                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-649359 sudo cat                            | bridge-649359 | jenkins | v1.35.0 | 11 Feb 25 03:24 UTC | 11 Feb 25 03:24 UTC |
	|         | /lib/systemd/system/containerd.service               |               |         |         |                     |                     |
	| ssh     | -p bridge-649359 sudo cat                            | bridge-649359 | jenkins | v1.35.0 | 11 Feb 25 03:24 UTC | 11 Feb 25 03:24 UTC |
	|         | /etc/containerd/config.toml                          |               |         |         |                     |                     |
	| ssh     | -p bridge-649359 sudo                                | bridge-649359 | jenkins | v1.35.0 | 11 Feb 25 03:24 UTC | 11 Feb 25 03:24 UTC |
	|         | containerd config dump                               |               |         |         |                     |                     |
	| ssh     | -p bridge-649359 sudo                                | bridge-649359 | jenkins | v1.35.0 | 11 Feb 25 03:24 UTC | 11 Feb 25 03:24 UTC |
	|         | systemctl status crio --all                          |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-649359 sudo                                | bridge-649359 | jenkins | v1.35.0 | 11 Feb 25 03:24 UTC | 11 Feb 25 03:24 UTC |
	|         | systemctl cat crio --no-pager                        |               |         |         |                     |                     |
	| ssh     | -p bridge-649359 sudo find                           | bridge-649359 | jenkins | v1.35.0 | 11 Feb 25 03:24 UTC | 11 Feb 25 03:24 UTC |
	|         | /etc/crio -type f -exec sh -c                        |               |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |               |         |         |                     |                     |
	| ssh     | -p bridge-649359 sudo crio                           | bridge-649359 | jenkins | v1.35.0 | 11 Feb 25 03:24 UTC | 11 Feb 25 03:24 UTC |
	|         | config                                               |               |         |         |                     |                     |
	| delete  | -p bridge-649359                                     | bridge-649359 | jenkins | v1.35.0 | 11 Feb 25 03:24 UTC | 11 Feb 25 03:24 UTC |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/11 03:23:13
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0211 03:23:13.081035   76224 out.go:345] Setting OutFile to fd 1 ...
	I0211 03:23:13.081187   76224 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 03:23:13.081200   76224 out.go:358] Setting ErrFile to fd 2...
	I0211 03:23:13.081207   76224 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 03:23:13.081496   76224 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20400-12456/.minikube/bin
	I0211 03:23:13.082126   76224 out.go:352] Setting JSON to false
	I0211 03:23:13.083210   76224 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":7544,"bootTime":1739236649,"procs":307,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0211 03:23:13.083303   76224 start.go:139] virtualization: kvm guest
	I0211 03:23:13.085425   76224 out.go:177] * [bridge-649359] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0211 03:23:13.087070   76224 out.go:177]   - MINIKUBE_LOCATION=20400
	I0211 03:23:13.087088   76224 notify.go:220] Checking for updates...
	I0211 03:23:13.089378   76224 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0211 03:23:13.090807   76224 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20400-12456/kubeconfig
	I0211 03:23:13.091907   76224 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20400-12456/.minikube
	I0211 03:23:13.093076   76224 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0211 03:23:13.094188   76224 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0211 03:23:13.095667   76224 config.go:182] Loaded profile config "enable-default-cni-649359": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0211 03:23:13.095778   76224 config.go:182] Loaded profile config "flannel-649359": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0211 03:23:13.095889   76224 config.go:182] Loaded profile config "old-k8s-version-244815": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0211 03:23:13.095994   76224 driver.go:394] Setting default libvirt URI to qemu:///system
	I0211 03:23:13.136607   76224 out.go:177] * Using the kvm2 driver based on user configuration
	I0211 03:23:13.137908   76224 start.go:297] selected driver: kvm2
	I0211 03:23:13.137925   76224 start.go:901] validating driver "kvm2" against <nil>
	I0211 03:23:13.137936   76224 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0211 03:23:13.138755   76224 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0211 03:23:13.138832   76224 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20400-12456/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0211 03:23:13.155651   76224 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0211 03:23:13.155732   76224 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0211 03:23:13.156061   76224 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0211 03:23:13.156101   76224 cni.go:84] Creating CNI manager for "bridge"
	I0211 03:23:13.156111   76224 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0211 03:23:13.156178   76224 start.go:340] cluster config:
	{Name:bridge-649359 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-649359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPl
ugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: Au
toPauseInterval:1m0s}
	I0211 03:23:13.156321   76224 iso.go:125] acquiring lock: {Name:mkf866c6e52b4efa55cc59a9f329105471716f9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0211 03:23:13.158222   76224 out.go:177] * Starting "bridge-649359" primary control-plane node in "bridge-649359" cluster
	I0211 03:23:13.159578   76224 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0211 03:23:13.159638   76224 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20400-12456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0211 03:23:13.159650   76224 cache.go:56] Caching tarball of preloaded images
	I0211 03:23:13.159745   76224 preload.go:172] Found /home/jenkins/minikube-integration/20400-12456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0211 03:23:13.159757   76224 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0211 03:23:13.159900   76224 profile.go:143] Saving config to /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/config.json ...
	I0211 03:23:13.159922   76224 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/config.json: {Name:mk2f137687eec59fed010b0831cd63b8499c2c53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 03:23:13.160066   76224 start.go:360] acquireMachinesLock for bridge-649359: {Name:mk0cbf79bfabdb28d0a301765db34c154a72eff0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0211 03:23:13.160096   76224 start.go:364] duration metric: took 17.084µs to acquireMachinesLock for "bridge-649359"
	I0211 03:23:13.160114   76224 start.go:93] Provisioning new machine with config: &{Name:bridge-649359 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-649359 Namespa
ce:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0211 03:23:13.160191   76224 start.go:125] createHost starting for "" (driver="kvm2")
	I0211 03:23:09.983015   73602 pod_ready.go:103] pod "coredns-668d6bf9bc-hvcxh" in "kube-system" namespace has status "Ready":"False"
	I0211 03:23:10.983987   73602 pod_ready.go:93] pod "coredns-668d6bf9bc-hvcxh" in "kube-system" namespace has status "Ready":"True"
	I0211 03:23:10.984014   73602 pod_ready.go:82] duration metric: took 5.507517178s for pod "coredns-668d6bf9bc-hvcxh" in "kube-system" namespace to be "Ready" ...
	I0211 03:23:10.984026   73602 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-lszj7" in "kube-system" namespace to be "Ready" ...
	I0211 03:23:12.497549   73602 pod_ready.go:98] pod "coredns-668d6bf9bc-lszj7" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-02-11 03:23:12 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-02-11 03:23:05 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-02-11 03:23:05 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-02-11 03:23:05 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-02-11 03:23:05 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.50.227 HostIPs:[{IP:192.168.50
.227}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2025-02-11 03:23:05 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-02-11 03:23:06 +0000 UTC,FinishedAt:2025-02-11 03:23:12 +0000 UTC,ContainerID:cri-o://63c8a21527bb528f6980b3e58bd03f4a8eac765b18e787634a7adacf7c5b7e12,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://63c8a21527bb528f6980b3e58bd03f4a8eac765b18e787634a7adacf7c5b7e12 Started:0xc001c4f680 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001ff4170} {Name:kube-api-access-l7qth MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001ff4180}] User:ni
l AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0211 03:23:12.497589   73602 pod_ready.go:82] duration metric: took 1.513552822s for pod "coredns-668d6bf9bc-lszj7" in "kube-system" namespace to be "Ready" ...
	E0211 03:23:12.497608   73602 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-668d6bf9bc-lszj7" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-02-11 03:23:12 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-02-11 03:23:05 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-02-11 03:23:05 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-02-11 03:23:05 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-02-11 03:23:05 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.5
0.227 HostIPs:[{IP:192.168.50.227}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2025-02-11 03:23:05 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-02-11 03:23:06 +0000 UTC,FinishedAt:2025-02-11 03:23:12 +0000 UTC,ContainerID:cri-o://63c8a21527bb528f6980b3e58bd03f4a8eac765b18e787634a7adacf7c5b7e12,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://63c8a21527bb528f6980b3e58bd03f4a8eac765b18e787634a7adacf7c5b7e12 Started:0xc001c4f680 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001ff4170} {Name:kube-api-access-l7qth MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveRe
adOnly:0xc001ff4180}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0211 03:23:12.497632   73602 pod_ready.go:79] waiting up to 15m0s for pod "etcd-enable-default-cni-649359" in "kube-system" namespace to be "Ready" ...
	I0211 03:23:14.503935   73602 pod_ready.go:93] pod "etcd-enable-default-cni-649359" in "kube-system" namespace has status "Ready":"True"
	I0211 03:23:14.503963   73602 pod_ready.go:82] duration metric: took 2.006319933s for pod "etcd-enable-default-cni-649359" in "kube-system" namespace to be "Ready" ...
	I0211 03:23:14.503988   73602 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-enable-default-cni-649359" in "kube-system" namespace to be "Ready" ...
	I0211 03:23:14.508947   73602 pod_ready.go:93] pod "kube-apiserver-enable-default-cni-649359" in "kube-system" namespace has status "Ready":"True"
	I0211 03:23:14.508976   73602 pod_ready.go:82] duration metric: took 4.979657ms for pod "kube-apiserver-enable-default-cni-649359" in "kube-system" namespace to be "Ready" ...
	I0211 03:23:14.508989   73602 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-enable-default-cni-649359" in "kube-system" namespace to be "Ready" ...
	I0211 03:23:14.517657   73602 pod_ready.go:93] pod "kube-controller-manager-enable-default-cni-649359" in "kube-system" namespace has status "Ready":"True"
	I0211 03:23:14.517691   73602 pod_ready.go:82] duration metric: took 8.694109ms for pod "kube-controller-manager-enable-default-cni-649359" in "kube-system" namespace to be "Ready" ...
	I0211 03:23:14.517708   73602 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-ts7wz" in "kube-system" namespace to be "Ready" ...
	I0211 03:23:14.525934   73602 pod_ready.go:93] pod "kube-proxy-ts7wz" in "kube-system" namespace has status "Ready":"True"
	I0211 03:23:14.525957   73602 pod_ready.go:82] duration metric: took 8.240149ms for pod "kube-proxy-ts7wz" in "kube-system" namespace to be "Ready" ...
	I0211 03:23:14.525970   73602 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-enable-default-cni-649359" in "kube-system" namespace to be "Ready" ...
	I0211 03:23:14.580286   73602 pod_ready.go:93] pod "kube-scheduler-enable-default-cni-649359" in "kube-system" namespace has status "Ready":"True"
	I0211 03:23:14.580312   73602 pod_ready.go:82] duration metric: took 54.332262ms for pod "kube-scheduler-enable-default-cni-649359" in "kube-system" namespace to be "Ready" ...
	I0211 03:23:14.580324   73602 pod_ready.go:39] duration metric: took 9.112283658s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0211 03:23:14.580342   73602 api_server.go:52] waiting for apiserver process to appear ...
	I0211 03:23:14.580402   73602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:23:14.596650   73602 api_server.go:72] duration metric: took 9.490364s to wait for apiserver process to appear ...
	I0211 03:23:14.596678   73602 api_server.go:88] waiting for apiserver healthz status ...
	I0211 03:23:14.596699   73602 api_server.go:253] Checking apiserver healthz at https://192.168.50.227:8443/healthz ...
	I0211 03:23:14.602310   73602 api_server.go:279] https://192.168.50.227:8443/healthz returned 200:
	ok
	I0211 03:23:14.603319   73602 api_server.go:141] control plane version: v1.32.1
	I0211 03:23:14.603343   73602 api_server.go:131] duration metric: took 6.658485ms to wait for apiserver health ...
	I0211 03:23:14.603353   73602 system_pods.go:43] waiting for kube-system pods to appear ...
	I0211 03:23:14.781953   73602 system_pods.go:59] 7 kube-system pods found
	I0211 03:23:14.781995   73602 system_pods.go:61] "coredns-668d6bf9bc-hvcxh" [09bf1572-919d-44aa-9ec7-8879ade61727] Running
	I0211 03:23:14.782004   73602 system_pods.go:61] "etcd-enable-default-cni-649359" [448e08a5-abac-4a6d-8b4b-e22c331a9fe6] Running
	I0211 03:23:14.782011   73602 system_pods.go:61] "kube-apiserver-enable-default-cni-649359" [3f99d598-6375-4fbe-9003-e5fff13e8393] Running
	I0211 03:23:14.782018   73602 system_pods.go:61] "kube-controller-manager-enable-default-cni-649359" [c7b62bcf-3720-4b14-91de-2a63ea303ea9] Running
	I0211 03:23:14.782023   73602 system_pods.go:61] "kube-proxy-ts7wz" [63d1bb7d-fd8d-49bc-a22f-8df07e7d4e40] Running
	I0211 03:23:14.782030   73602 system_pods.go:61] "kube-scheduler-enable-default-cni-649359" [661b33bc-c632-495f-bda7-5cecf5551b1a] Running
	I0211 03:23:14.782037   73602 system_pods.go:61] "storage-provisioner" [5cd25b79-78ab-4fe4-956b-2fc2424efd9d] Running
	I0211 03:23:14.782046   73602 system_pods.go:74] duration metric: took 178.684869ms to wait for pod list to return data ...
	I0211 03:23:14.782062   73602 default_sa.go:34] waiting for default service account to be created ...
	I0211 03:23:14.982953   73602 default_sa.go:45] found service account: "default"
	I0211 03:23:14.982984   73602 default_sa.go:55] duration metric: took 200.913238ms for default service account to be created ...
	I0211 03:23:14.982997   73602 system_pods.go:116] waiting for k8s-apps to be running ...
	I0211 03:23:15.182233   73602 system_pods.go:86] 7 kube-system pods found
	I0211 03:23:15.182269   73602 system_pods.go:89] "coredns-668d6bf9bc-hvcxh" [09bf1572-919d-44aa-9ec7-8879ade61727] Running
	I0211 03:23:15.182281   73602 system_pods.go:89] "etcd-enable-default-cni-649359" [448e08a5-abac-4a6d-8b4b-e22c331a9fe6] Running
	I0211 03:23:15.182288   73602 system_pods.go:89] "kube-apiserver-enable-default-cni-649359" [3f99d598-6375-4fbe-9003-e5fff13e8393] Running
	I0211 03:23:15.182294   73602 system_pods.go:89] "kube-controller-manager-enable-default-cni-649359" [c7b62bcf-3720-4b14-91de-2a63ea303ea9] Running
	I0211 03:23:15.182299   73602 system_pods.go:89] "kube-proxy-ts7wz" [63d1bb7d-fd8d-49bc-a22f-8df07e7d4e40] Running
	I0211 03:23:15.182305   73602 system_pods.go:89] "kube-scheduler-enable-default-cni-649359" [661b33bc-c632-495f-bda7-5cecf5551b1a] Running
	I0211 03:23:15.182314   73602 system_pods.go:89] "storage-provisioner" [5cd25b79-78ab-4fe4-956b-2fc2424efd9d] Running
	I0211 03:23:15.182325   73602 system_pods.go:126] duration metric: took 199.318436ms to wait for k8s-apps to be running ...
	I0211 03:23:15.182339   73602 system_svc.go:44] waiting for kubelet service to be running ....
	I0211 03:23:15.182396   73602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0211 03:23:15.197122   73602 system_svc.go:56] duration metric: took 14.775768ms WaitForService to wait for kubelet
	I0211 03:23:15.197147   73602 kubeadm.go:582] duration metric: took 10.090865662s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0211 03:23:15.197175   73602 node_conditions.go:102] verifying NodePressure condition ...
	I0211 03:23:15.384040   73602 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0211 03:23:15.384075   73602 node_conditions.go:123] node cpu capacity is 2
	I0211 03:23:15.384091   73602 node_conditions.go:105] duration metric: took 186.907093ms to run NodePressure ...
	I0211 03:23:15.384116   73602 start.go:241] waiting for startup goroutines ...
	I0211 03:23:15.384132   73602 start.go:246] waiting for cluster config update ...
	I0211 03:23:15.384147   73602 start.go:255] writing updated cluster config ...
	I0211 03:23:15.384497   73602 ssh_runner.go:195] Run: rm -f paused
	I0211 03:23:15.442411   73602 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0211 03:23:15.445085   73602 out.go:177] * Done! kubectl is now configured to use "enable-default-cni-649359" cluster and "default" namespace by default
	I0211 03:23:15.195574   74474 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.382506392s)
	I0211 03:23:15.195612   74474 crio.go:469] duration metric: took 2.382639633s to extract the tarball
	I0211 03:23:15.195621   74474 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0211 03:23:15.233474   74474 ssh_runner.go:195] Run: sudo crictl images --output json
	I0211 03:23:15.276475   74474 crio.go:514] all images are preloaded for cri-o runtime.
	I0211 03:23:15.276501   74474 cache_images.go:84] Images are preloaded, skipping loading
	I0211 03:23:15.276510   74474 kubeadm.go:934] updating node { 192.168.72.59 8443 v1.32.1 crio true true} ...
	I0211 03:23:15.276617   74474 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=flannel-649359 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.59
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:flannel-649359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel}
	I0211 03:23:15.276679   74474 ssh_runner.go:195] Run: crio config
	I0211 03:23:15.329421   74474 cni.go:84] Creating CNI manager for "flannel"
	I0211 03:23:15.329449   74474 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0211 03:23:15.329503   74474 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.59 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:flannel-649359 NodeName:flannel-649359 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.59"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.59 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/ku
bernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0211 03:23:15.329667   74474 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.59
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "flannel-649359"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.59"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.59"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0211 03:23:15.329748   74474 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0211 03:23:15.341419   74474 binaries.go:44] Found k8s binaries, skipping transfer
	I0211 03:23:15.341514   74474 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0211 03:23:15.351809   74474 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0211 03:23:15.368240   74474 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0211 03:23:15.388068   74474 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2291 bytes)
	I0211 03:23:15.406303   74474 ssh_runner.go:195] Run: grep 192.168.72.59	control-plane.minikube.internal$ /etc/hosts
	I0211 03:23:15.410604   74474 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.59	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0211 03:23:15.423051   74474 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0211 03:23:15.583428   74474 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0211 03:23:15.611217   74474 certs.go:68] Setting up /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359 for IP: 192.168.72.59
	I0211 03:23:15.611246   74474 certs.go:194] generating shared ca certs ...
	I0211 03:23:15.611270   74474 certs.go:226] acquiring lock for ca certs: {Name:mk14e70e4f3b98aff6eac535114852cc1d70eb3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 03:23:15.611470   74474 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20400-12456/.minikube/ca.key
	I0211 03:23:15.611537   74474 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20400-12456/.minikube/proxy-client-ca.key
	I0211 03:23:15.611554   74474 certs.go:256] generating profile certs ...
	I0211 03:23:15.611652   74474 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/client.key
	I0211 03:23:15.611677   74474 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/client.crt with IP's: []
	I0211 03:23:15.995256   74474 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/client.crt ...
	I0211 03:23:15.995283   74474 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/client.crt: {Name:mkbdf2ec339d7105059cec29fe5c2f5bd0dc1412 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 03:23:15.995430   74474 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/client.key ...
	I0211 03:23:15.995440   74474 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/client.key: {Name:mk7c5762a04702befc810b6a06ee4f9739e5f86a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 03:23:15.995512   74474 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/apiserver.key.0cce74ff
	I0211 03:23:15.995527   74474 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/apiserver.crt.0cce74ff with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.59]
	I0211 03:23:16.130389   74474 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/apiserver.crt.0cce74ff ...
	I0211 03:23:16.130415   74474 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/apiserver.crt.0cce74ff: {Name:mke0717e04de367ea0b393259377ff7fe47ea1c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 03:23:16.130570   74474 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/apiserver.key.0cce74ff ...
	I0211 03:23:16.130582   74474 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/apiserver.key.0cce74ff: {Name:mk409a3ee4e8749e5e84086d3851197f78ce022a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 03:23:16.130647   74474 certs.go:381] copying /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/apiserver.crt.0cce74ff -> /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/apiserver.crt
	I0211 03:23:16.130725   74474 certs.go:385] copying /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/apiserver.key.0cce74ff -> /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/apiserver.key
	I0211 03:23:16.130786   74474 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/proxy-client.key
	I0211 03:23:16.130801   74474 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/proxy-client.crt with IP's: []
	I0211 03:23:16.490091   74474 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/proxy-client.crt ...
	I0211 03:23:16.490127   74474 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/proxy-client.crt: {Name:mke9f0321496c7ad0c90bde87c49c02b8699bb9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 03:23:16.490314   74474 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/proxy-client.key ...
	I0211 03:23:16.490332   74474 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/proxy-client.key: {Name:mk7e32c1f4c9365545d3195e51a54f0c9815aad4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
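	The profile client and proxy-client certificates above are generated in Go (crypto.go), not with openssl; for illustration only, a roughly equivalent manual sequence signed by the shared minikube CA would look like the sketch below (file names are hypothetical):
	
	  # create a client key and CSR, then sign the CSR with the cluster CA
	  openssl req -new -newkey rsa:2048 -nodes -keyout client.key \
	    -subj "/O=system:masters/CN=minikube-user" -out client.csr
	  openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key \
	    -CAcreateserial -days 365 -out client.crt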
	I0211 03:23:16.490528   74474 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/19645.pem (1338 bytes)
	W0211 03:23:16.490565   74474 certs.go:480] ignoring /home/jenkins/minikube-integration/20400-12456/.minikube/certs/19645_empty.pem, impossibly tiny 0 bytes
	I0211 03:23:16.490576   74474 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca-key.pem (1675 bytes)
	I0211 03:23:16.490598   74474 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca.pem (1078 bytes)
	I0211 03:23:16.490618   74474 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/cert.pem (1123 bytes)
	I0211 03:23:16.490644   74474 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/key.pem (1679 bytes)
	I0211 03:23:16.490684   74474 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12456/.minikube/files/etc/ssl/certs/196452.pem (1708 bytes)
	I0211 03:23:16.491315   74474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0211 03:23:16.523704   74474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0211 03:23:16.549967   74474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0211 03:23:16.575117   74474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0211 03:23:16.608189   74474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0211 03:23:16.636645   74474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0211 03:23:16.659645   74474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0211 03:23:16.715276   74474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0211 03:23:16.742104   74474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/certs/19645.pem --> /usr/share/ca-certificates/19645.pem (1338 bytes)
	I0211 03:23:16.766537   74474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/files/etc/ssl/certs/196452.pem --> /usr/share/ca-certificates/196452.pem (1708 bytes)
	I0211 03:23:16.790666   74474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0211 03:23:16.814234   74474 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0211 03:23:16.832059   74474 ssh_runner.go:195] Run: openssl version
	I0211 03:23:16.837672   74474 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19645.pem && ln -fs /usr/share/ca-certificates/19645.pem /etc/ssl/certs/19645.pem"
	I0211 03:23:16.848227   74474 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19645.pem
	I0211 03:23:16.852664   74474 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 11 02:09 /usr/share/ca-certificates/19645.pem
	I0211 03:23:16.852725   74474 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19645.pem
	I0211 03:23:16.858666   74474 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/19645.pem /etc/ssl/certs/51391683.0"
	I0211 03:23:16.869139   74474 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/196452.pem && ln -fs /usr/share/ca-certificates/196452.pem /etc/ssl/certs/196452.pem"
	I0211 03:23:16.879767   74474 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/196452.pem
	I0211 03:23:16.884005   74474 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 11 02:09 /usr/share/ca-certificates/196452.pem
	I0211 03:23:16.884048   74474 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/196452.pem
	I0211 03:23:16.889414   74474 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/196452.pem /etc/ssl/certs/3ec20f2e.0"
	I0211 03:23:16.903168   74474 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0211 03:23:16.916947   74474 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0211 03:23:16.922330   74474 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 11 02:02 /usr/share/ca-certificates/minikubeCA.pem
	I0211 03:23:16.922400   74474 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0211 03:23:16.928123   74474 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
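	The ln -fs calls above follow OpenSSL's hashed-symlink convention for trusted CAs: the link name is the subject hash of the certificate plus a ".0" suffix. A generic sketch of the same pattern, using the minikubeCA path from the log:
	
	  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"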
	I0211 03:23:16.939389   74474 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0211 03:23:16.943085   74474 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0211 03:23:16.943141   74474 kubeadm.go:392] StartCluster: {Name:flannel-649359 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:flannel-649359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.72.59 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0211 03:23:16.943206   74474 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0211 03:23:16.943242   74474 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0211 03:23:16.981955   74474 cri.go:89] found id: ""
	I0211 03:23:16.982025   74474 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0211 03:23:16.993058   74474 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0211 03:23:17.002019   74474 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0211 03:23:17.010942   74474 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0211 03:23:17.010966   74474 kubeadm.go:157] found existing configuration files:
	
	I0211 03:23:17.011017   74474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0211 03:23:17.019432   74474 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0211 03:23:17.019497   74474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0211 03:23:17.029653   74474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0211 03:23:17.039298   74474 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0211 03:23:17.039360   74474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0211 03:23:17.049050   74474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0211 03:23:17.058831   74474 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0211 03:23:17.058945   74474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0211 03:23:17.068991   74474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0211 03:23:17.078752   74474 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0211 03:23:17.078811   74474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0211 03:23:17.089006   74474 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0211 03:23:17.304026   74474 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
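	The preflight warning above is informational; as the message itself suggests, it can be cleared on the node with:
	
	  sudo systemctl enable kubelet.service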
	I0211 03:23:13.167417   76224 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0211 03:23:13.167638   76224 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 03:23:13.167696   76224 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 03:23:13.189234   76224 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42317
	I0211 03:23:13.189724   76224 main.go:141] libmachine: () Calling .GetVersion
	I0211 03:23:13.190382   76224 main.go:141] libmachine: Using API Version  1
	I0211 03:23:13.190408   76224 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 03:23:13.190728   76224 main.go:141] libmachine: () Calling .GetMachineName
	I0211 03:23:13.190992   76224 main.go:141] libmachine: (bridge-649359) Calling .GetMachineName
	I0211 03:23:13.191140   76224 main.go:141] libmachine: (bridge-649359) Calling .DriverName
	I0211 03:23:13.191264   76224 start.go:159] libmachine.API.Create for "bridge-649359" (driver="kvm2")
	I0211 03:23:13.191286   76224 client.go:168] LocalClient.Create starting
	I0211 03:23:13.191315   76224 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca.pem
	I0211 03:23:13.191349   76224 main.go:141] libmachine: Decoding PEM data...
	I0211 03:23:13.191362   76224 main.go:141] libmachine: Parsing certificate...
	I0211 03:23:13.191421   76224 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20400-12456/.minikube/certs/cert.pem
	I0211 03:23:13.191440   76224 main.go:141] libmachine: Decoding PEM data...
	I0211 03:23:13.191454   76224 main.go:141] libmachine: Parsing certificate...
	I0211 03:23:13.191471   76224 main.go:141] libmachine: Running pre-create checks...
	I0211 03:23:13.191478   76224 main.go:141] libmachine: (bridge-649359) Calling .PreCreateCheck
	I0211 03:23:13.191910   76224 main.go:141] libmachine: (bridge-649359) Calling .GetConfigRaw
	I0211 03:23:13.192281   76224 main.go:141] libmachine: Creating machine...
	I0211 03:23:13.192294   76224 main.go:141] libmachine: (bridge-649359) Calling .Create
	I0211 03:23:13.192443   76224 main.go:141] libmachine: (bridge-649359) creating KVM machine...
	I0211 03:23:13.192454   76224 main.go:141] libmachine: (bridge-649359) creating network...
	I0211 03:23:13.193881   76224 main.go:141] libmachine: (bridge-649359) DBG | found existing default KVM network
	I0211 03:23:13.195029   76224 main.go:141] libmachine: (bridge-649359) DBG | I0211 03:23:13.194906   76257 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:87:bb:4d} reservation:<nil>}
	I0211 03:23:13.195938   76224 main.go:141] libmachine: (bridge-649359) DBG | I0211 03:23:13.195871   76257 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:25:b5:96} reservation:<nil>}
	I0211 03:23:13.197691   76224 main.go:141] libmachine: (bridge-649359) DBG | I0211 03:23:13.197624   76257 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000187c60}
	I0211 03:23:13.197851   76224 main.go:141] libmachine: (bridge-649359) DBG | created network xml: 
	I0211 03:23:13.197868   76224 main.go:141] libmachine: (bridge-649359) DBG | <network>
	I0211 03:23:13.197879   76224 main.go:141] libmachine: (bridge-649359) DBG |   <name>mk-bridge-649359</name>
	I0211 03:23:13.197894   76224 main.go:141] libmachine: (bridge-649359) DBG |   <dns enable='no'/>
	I0211 03:23:13.197905   76224 main.go:141] libmachine: (bridge-649359) DBG |   
	I0211 03:23:13.197918   76224 main.go:141] libmachine: (bridge-649359) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0211 03:23:13.197931   76224 main.go:141] libmachine: (bridge-649359) DBG |     <dhcp>
	I0211 03:23:13.197943   76224 main.go:141] libmachine: (bridge-649359) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0211 03:23:13.197954   76224 main.go:141] libmachine: (bridge-649359) DBG |     </dhcp>
	I0211 03:23:13.197961   76224 main.go:141] libmachine: (bridge-649359) DBG |   </ip>
	I0211 03:23:13.197974   76224 main.go:141] libmachine: (bridge-649359) DBG |   
	I0211 03:23:13.197984   76224 main.go:141] libmachine: (bridge-649359) DBG | </network>
	I0211 03:23:13.197996   76224 main.go:141] libmachine: (bridge-649359) DBG | 
	I0211 03:23:13.204135   76224 main.go:141] libmachine: (bridge-649359) DBG | trying to create private KVM network mk-bridge-649359 192.168.61.0/24...
	I0211 03:23:13.289762   76224 main.go:141] libmachine: (bridge-649359) DBG | private KVM network mk-bridge-649359 192.168.61.0/24 created
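	The driver creates this isolated network through the libvirt API; an equivalent manual sequence with virsh would look roughly like the sketch below (illustrative only, the XML file path is hypothetical and the XML content is the block printed above):
	
	  virsh net-define /tmp/mk-bridge-649359.xml
	  virsh net-start mk-bridge-649359
	  virsh net-autostart mk-bridge-649359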
	I0211 03:23:13.289795   76224 main.go:141] libmachine: (bridge-649359) setting up store path in /home/jenkins/minikube-integration/20400-12456/.minikube/machines/bridge-649359 ...
	I0211 03:23:13.289809   76224 main.go:141] libmachine: (bridge-649359) DBG | I0211 03:23:13.289718   76257 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20400-12456/.minikube
	I0211 03:23:13.289828   76224 main.go:141] libmachine: (bridge-649359) building disk image from file:///home/jenkins/minikube-integration/20400-12456/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0211 03:23:13.289992   76224 main.go:141] libmachine: (bridge-649359) Downloading /home/jenkins/minikube-integration/20400-12456/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20400-12456/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0211 03:23:13.568686   76224 main.go:141] libmachine: (bridge-649359) DBG | I0211 03:23:13.568510   76257 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20400-12456/.minikube/machines/bridge-649359/id_rsa...
	I0211 03:23:13.673673   76224 main.go:141] libmachine: (bridge-649359) DBG | I0211 03:23:13.673537   76257 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20400-12456/.minikube/machines/bridge-649359/bridge-649359.rawdisk...
	I0211 03:23:13.673703   76224 main.go:141] libmachine: (bridge-649359) DBG | Writing magic tar header
	I0211 03:23:13.673718   76224 main.go:141] libmachine: (bridge-649359) DBG | Writing SSH key tar header
	I0211 03:23:13.673734   76224 main.go:141] libmachine: (bridge-649359) DBG | I0211 03:23:13.673652   76257 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20400-12456/.minikube/machines/bridge-649359 ...
	I0211 03:23:13.673808   76224 main.go:141] libmachine: (bridge-649359) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20400-12456/.minikube/machines/bridge-649359
	I0211 03:23:13.673841   76224 main.go:141] libmachine: (bridge-649359) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20400-12456/.minikube/machines
	I0211 03:23:13.673873   76224 main.go:141] libmachine: (bridge-649359) setting executable bit set on /home/jenkins/minikube-integration/20400-12456/.minikube/machines/bridge-649359 (perms=drwx------)
	I0211 03:23:13.673892   76224 main.go:141] libmachine: (bridge-649359) setting executable bit set on /home/jenkins/minikube-integration/20400-12456/.minikube/machines (perms=drwxr-xr-x)
	I0211 03:23:13.673919   76224 main.go:141] libmachine: (bridge-649359) setting executable bit set on /home/jenkins/minikube-integration/20400-12456/.minikube (perms=drwxr-xr-x)
	I0211 03:23:13.673932   76224 main.go:141] libmachine: (bridge-649359) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20400-12456/.minikube
	I0211 03:23:13.673948   76224 main.go:141] libmachine: (bridge-649359) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20400-12456
	I0211 03:23:13.673956   76224 main.go:141] libmachine: (bridge-649359) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0211 03:23:13.673965   76224 main.go:141] libmachine: (bridge-649359) DBG | checking permissions on dir: /home/jenkins
	I0211 03:23:13.673972   76224 main.go:141] libmachine: (bridge-649359) DBG | checking permissions on dir: /home
	I0211 03:23:13.673982   76224 main.go:141] libmachine: (bridge-649359) DBG | skipping /home - not owner
	I0211 03:23:13.674028   76224 main.go:141] libmachine: (bridge-649359) setting executable bit set on /home/jenkins/minikube-integration/20400-12456 (perms=drwxrwxr-x)
	I0211 03:23:13.674047   76224 main.go:141] libmachine: (bridge-649359) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0211 03:23:13.674064   76224 main.go:141] libmachine: (bridge-649359) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0211 03:23:13.674074   76224 main.go:141] libmachine: (bridge-649359) creating domain...
	I0211 03:23:13.675082   76224 main.go:141] libmachine: (bridge-649359) define libvirt domain using xml: 
	I0211 03:23:13.675104   76224 main.go:141] libmachine: (bridge-649359) <domain type='kvm'>
	I0211 03:23:13.675140   76224 main.go:141] libmachine: (bridge-649359)   <name>bridge-649359</name>
	I0211 03:23:13.675166   76224 main.go:141] libmachine: (bridge-649359)   <memory unit='MiB'>3072</memory>
	I0211 03:23:13.675192   76224 main.go:141] libmachine: (bridge-649359)   <vcpu>2</vcpu>
	I0211 03:23:13.675228   76224 main.go:141] libmachine: (bridge-649359)   <features>
	I0211 03:23:13.675245   76224 main.go:141] libmachine: (bridge-649359)     <acpi/>
	I0211 03:23:13.675256   76224 main.go:141] libmachine: (bridge-649359)     <apic/>
	I0211 03:23:13.675282   76224 main.go:141] libmachine: (bridge-649359)     <pae/>
	I0211 03:23:13.675302   76224 main.go:141] libmachine: (bridge-649359)     
	I0211 03:23:13.675312   76224 main.go:141] libmachine: (bridge-649359)   </features>
	I0211 03:23:13.675331   76224 main.go:141] libmachine: (bridge-649359)   <cpu mode='host-passthrough'>
	I0211 03:23:13.675340   76224 main.go:141] libmachine: (bridge-649359)   
	I0211 03:23:13.675348   76224 main.go:141] libmachine: (bridge-649359)   </cpu>
	I0211 03:23:13.675356   76224 main.go:141] libmachine: (bridge-649359)   <os>
	I0211 03:23:13.675364   76224 main.go:141] libmachine: (bridge-649359)     <type>hvm</type>
	I0211 03:23:13.675371   76224 main.go:141] libmachine: (bridge-649359)     <boot dev='cdrom'/>
	I0211 03:23:13.675381   76224 main.go:141] libmachine: (bridge-649359)     <boot dev='hd'/>
	I0211 03:23:13.675390   76224 main.go:141] libmachine: (bridge-649359)     <bootmenu enable='no'/>
	I0211 03:23:13.675401   76224 main.go:141] libmachine: (bridge-649359)   </os>
	I0211 03:23:13.675408   76224 main.go:141] libmachine: (bridge-649359)   <devices>
	I0211 03:23:13.675429   76224 main.go:141] libmachine: (bridge-649359)     <disk type='file' device='cdrom'>
	I0211 03:23:13.675449   76224 main.go:141] libmachine: (bridge-649359)       <source file='/home/jenkins/minikube-integration/20400-12456/.minikube/machines/bridge-649359/boot2docker.iso'/>
	I0211 03:23:13.675463   76224 main.go:141] libmachine: (bridge-649359)       <target dev='hdc' bus='scsi'/>
	I0211 03:23:13.675473   76224 main.go:141] libmachine: (bridge-649359)       <readonly/>
	I0211 03:23:13.675490   76224 main.go:141] libmachine: (bridge-649359)     </disk>
	I0211 03:23:13.675502   76224 main.go:141] libmachine: (bridge-649359)     <disk type='file' device='disk'>
	I0211 03:23:13.675516   76224 main.go:141] libmachine: (bridge-649359)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0211 03:23:13.675534   76224 main.go:141] libmachine: (bridge-649359)       <source file='/home/jenkins/minikube-integration/20400-12456/.minikube/machines/bridge-649359/bridge-649359.rawdisk'/>
	I0211 03:23:13.675553   76224 main.go:141] libmachine: (bridge-649359)       <target dev='hda' bus='virtio'/>
	I0211 03:23:13.675563   76224 main.go:141] libmachine: (bridge-649359)     </disk>
	I0211 03:23:13.675572   76224 main.go:141] libmachine: (bridge-649359)     <interface type='network'>
	I0211 03:23:13.675583   76224 main.go:141] libmachine: (bridge-649359)       <source network='mk-bridge-649359'/>
	I0211 03:23:13.675593   76224 main.go:141] libmachine: (bridge-649359)       <model type='virtio'/>
	I0211 03:23:13.675602   76224 main.go:141] libmachine: (bridge-649359)     </interface>
	I0211 03:23:13.675611   76224 main.go:141] libmachine: (bridge-649359)     <interface type='network'>
	I0211 03:23:13.675621   76224 main.go:141] libmachine: (bridge-649359)       <source network='default'/>
	I0211 03:23:13.675629   76224 main.go:141] libmachine: (bridge-649359)       <model type='virtio'/>
	I0211 03:23:13.675638   76224 main.go:141] libmachine: (bridge-649359)     </interface>
	I0211 03:23:13.675647   76224 main.go:141] libmachine: (bridge-649359)     <serial type='pty'>
	I0211 03:23:13.675656   76224 main.go:141] libmachine: (bridge-649359)       <target port='0'/>
	I0211 03:23:13.675663   76224 main.go:141] libmachine: (bridge-649359)     </serial>
	I0211 03:23:13.675673   76224 main.go:141] libmachine: (bridge-649359)     <console type='pty'>
	I0211 03:23:13.675682   76224 main.go:141] libmachine: (bridge-649359)       <target type='serial' port='0'/>
	I0211 03:23:13.675692   76224 main.go:141] libmachine: (bridge-649359)     </console>
	I0211 03:23:13.675700   76224 main.go:141] libmachine: (bridge-649359)     <rng model='virtio'>
	I0211 03:23:13.675711   76224 main.go:141] libmachine: (bridge-649359)       <backend model='random'>/dev/random</backend>
	I0211 03:23:13.675719   76224 main.go:141] libmachine: (bridge-649359)     </rng>
	I0211 03:23:13.675728   76224 main.go:141] libmachine: (bridge-649359)     
	I0211 03:23:13.675752   76224 main.go:141] libmachine: (bridge-649359)     
	I0211 03:23:13.675762   76224 main.go:141] libmachine: (bridge-649359)   </devices>
	I0211 03:23:13.675770   76224 main.go:141] libmachine: (bridge-649359) </domain>
	I0211 03:23:13.675779   76224 main.go:141] libmachine: (bridge-649359) 
	I0211 03:23:13.679996   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2b:62:5e in network default
	I0211 03:23:13.680573   76224 main.go:141] libmachine: (bridge-649359) starting domain...
	I0211 03:23:13.680599   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:13.680607   76224 main.go:141] libmachine: (bridge-649359) ensuring networks are active...
	I0211 03:23:13.681392   76224 main.go:141] libmachine: (bridge-649359) Ensuring network default is active
	I0211 03:23:13.681717   76224 main.go:141] libmachine: (bridge-649359) Ensuring network mk-bridge-649359 is active
	I0211 03:23:13.682447   76224 main.go:141] libmachine: (bridge-649359) getting domain XML...
	I0211 03:23:13.683352   76224 main.go:141] libmachine: (bridge-649359) creating domain...
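	Likewise, defining and starting the guest from the domain XML above could be done by hand with virsh (illustrative sketch; the file path is hypothetical):
	
	  virsh define /tmp/bridge-649359.xml
	  virsh start bridge-649359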
	I0211 03:23:15.051707   76224 main.go:141] libmachine: (bridge-649359) waiting for IP...
	I0211 03:23:15.052588   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:15.053088   76224 main.go:141] libmachine: (bridge-649359) DBG | unable to find current IP address of domain bridge-649359 in network mk-bridge-649359
	I0211 03:23:15.053161   76224 main.go:141] libmachine: (bridge-649359) DBG | I0211 03:23:15.053065   76257 retry.go:31] will retry after 209.639096ms: waiting for domain to come up
	I0211 03:23:15.264732   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:15.265421   76224 main.go:141] libmachine: (bridge-649359) DBG | unable to find current IP address of domain bridge-649359 in network mk-bridge-649359
	I0211 03:23:15.265457   76224 main.go:141] libmachine: (bridge-649359) DBG | I0211 03:23:15.265390   76257 retry.go:31] will retry after 262.285345ms: waiting for domain to come up
	I0211 03:23:15.529778   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:15.530315   76224 main.go:141] libmachine: (bridge-649359) DBG | unable to find current IP address of domain bridge-649359 in network mk-bridge-649359
	I0211 03:23:15.530352   76224 main.go:141] libmachine: (bridge-649359) DBG | I0211 03:23:15.530309   76257 retry.go:31] will retry after 393.216116ms: waiting for domain to come up
	I0211 03:23:15.924884   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:15.925534   76224 main.go:141] libmachine: (bridge-649359) DBG | unable to find current IP address of domain bridge-649359 in network mk-bridge-649359
	I0211 03:23:15.925564   76224 main.go:141] libmachine: (bridge-649359) DBG | I0211 03:23:15.925493   76257 retry.go:31] will retry after 419.879829ms: waiting for domain to come up
	I0211 03:23:16.347214   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:16.347785   76224 main.go:141] libmachine: (bridge-649359) DBG | unable to find current IP address of domain bridge-649359 in network mk-bridge-649359
	I0211 03:23:16.347809   76224 main.go:141] libmachine: (bridge-649359) DBG | I0211 03:23:16.347757   76257 retry.go:31] will retry after 550.153899ms: waiting for domain to come up
	I0211 03:23:16.898975   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:16.899431   76224 main.go:141] libmachine: (bridge-649359) DBG | unable to find current IP address of domain bridge-649359 in network mk-bridge-649359
	I0211 03:23:16.899459   76224 main.go:141] libmachine: (bridge-649359) DBG | I0211 03:23:16.899394   76257 retry.go:31] will retry after 589.858812ms: waiting for domain to come up
	I0211 03:23:17.491285   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:17.491779   76224 main.go:141] libmachine: (bridge-649359) DBG | unable to find current IP address of domain bridge-649359 in network mk-bridge-649359
	I0211 03:23:17.491824   76224 main.go:141] libmachine: (bridge-649359) DBG | I0211 03:23:17.491763   76257 retry.go:31] will retry after 928.895182ms: waiting for domain to come up
	I0211 03:23:18.422036   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:18.422602   76224 main.go:141] libmachine: (bridge-649359) DBG | unable to find current IP address of domain bridge-649359 in network mk-bridge-649359
	I0211 03:23:18.422658   76224 main.go:141] libmachine: (bridge-649359) DBG | I0211 03:23:18.422593   76257 retry.go:31] will retry after 1.417755247s: waiting for domain to come up
	I0211 03:23:19.841760   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:19.842278   76224 main.go:141] libmachine: (bridge-649359) DBG | unable to find current IP address of domain bridge-649359 in network mk-bridge-649359
	I0211 03:23:19.842304   76224 main.go:141] libmachine: (bridge-649359) DBG | I0211 03:23:19.842255   76257 retry.go:31] will retry after 1.224447824s: waiting for domain to come up
	I0211 03:23:21.068177   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:21.068656   76224 main.go:141] libmachine: (bridge-649359) DBG | unable to find current IP address of domain bridge-649359 in network mk-bridge-649359
	I0211 03:23:21.068684   76224 main.go:141] libmachine: (bridge-649359) DBG | I0211 03:23:21.068629   76257 retry.go:31] will retry after 1.494225448s: waiting for domain to come up
	I0211 03:23:22.564518   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:22.565104   76224 main.go:141] libmachine: (bridge-649359) DBG | unable to find current IP address of domain bridge-649359 in network mk-bridge-649359
	I0211 03:23:22.565138   76224 main.go:141] libmachine: (bridge-649359) DBG | I0211 03:23:22.565052   76257 retry.go:31] will retry after 1.772565324s: waiting for domain to come up
	I0211 03:23:26.918722   74474 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0211 03:23:26.918779   74474 kubeadm.go:310] [preflight] Running pre-flight checks
	I0211 03:23:26.918857   74474 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0211 03:23:26.918980   74474 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0211 03:23:26.919097   74474 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0211 03:23:26.919151   74474 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0211 03:23:26.920561   74474 out.go:235]   - Generating certificates and keys ...
	I0211 03:23:26.920626   74474 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0211 03:23:26.920677   74474 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0211 03:23:26.920733   74474 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0211 03:23:26.920785   74474 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0211 03:23:26.920848   74474 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0211 03:23:26.920901   74474 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0211 03:23:26.920948   74474 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0211 03:23:26.921076   74474 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [flannel-649359 localhost] and IPs [192.168.72.59 127.0.0.1 ::1]
	I0211 03:23:26.921172   74474 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0211 03:23:26.921288   74474 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [flannel-649359 localhost] and IPs [192.168.72.59 127.0.0.1 ::1]
	I0211 03:23:26.921392   74474 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0211 03:23:26.921452   74474 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0211 03:23:26.921498   74474 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0211 03:23:26.921562   74474 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0211 03:23:26.921630   74474 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0211 03:23:26.921716   74474 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0211 03:23:26.921805   74474 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0211 03:23:26.921916   74474 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0211 03:23:26.922002   74474 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0211 03:23:26.922131   74474 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0211 03:23:26.922222   74474 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0211 03:23:26.923474   74474 out.go:235]   - Booting up control plane ...
	I0211 03:23:26.923561   74474 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0211 03:23:26.923649   74474 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0211 03:23:26.923706   74474 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0211 03:23:26.923812   74474 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0211 03:23:26.923900   74474 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0211 03:23:26.923945   74474 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0211 03:23:26.924054   74474 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0211 03:23:26.924157   74474 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0211 03:23:26.924215   74474 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.704091ms
	I0211 03:23:26.924291   74474 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0211 03:23:26.924363   74474 kubeadm.go:310] [api-check] The API server is healthy after 5.001216504s
	I0211 03:23:26.924491   74474 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0211 03:23:26.924592   74474 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0211 03:23:26.924676   74474 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0211 03:23:26.924893   74474 kubeadm.go:310] [mark-control-plane] Marking the node flannel-649359 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0211 03:23:26.924944   74474 kubeadm.go:310] [bootstrap-token] Using token: ebiq8w.v8x3puwqitndsbjn
	I0211 03:23:26.926051   74474 out.go:235]   - Configuring RBAC rules ...
	I0211 03:23:26.926155   74474 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0211 03:23:26.926227   74474 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0211 03:23:26.926354   74474 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0211 03:23:26.926470   74474 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0211 03:23:26.926585   74474 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0211 03:23:26.926690   74474 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0211 03:23:26.926853   74474 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0211 03:23:26.926942   74474 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0211 03:23:26.927006   74474 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0211 03:23:26.927016   74474 kubeadm.go:310] 
	I0211 03:23:26.927093   74474 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0211 03:23:26.927108   74474 kubeadm.go:310] 
	I0211 03:23:26.927222   74474 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0211 03:23:26.927232   74474 kubeadm.go:310] 
	I0211 03:23:26.927274   74474 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0211 03:23:26.927357   74474 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0211 03:23:26.927439   74474 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0211 03:23:26.927448   74474 kubeadm.go:310] 
	I0211 03:23:26.927495   74474 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0211 03:23:26.927501   74474 kubeadm.go:310] 
	I0211 03:23:26.927541   74474 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0211 03:23:26.927547   74474 kubeadm.go:310] 
	I0211 03:23:26.927590   74474 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0211 03:23:26.927675   74474 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0211 03:23:26.927736   74474 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0211 03:23:26.927743   74474 kubeadm.go:310] 
	I0211 03:23:26.927810   74474 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0211 03:23:26.927871   74474 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0211 03:23:26.927876   74474 kubeadm.go:310] 
	I0211 03:23:26.927943   74474 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ebiq8w.v8x3puwqitndsbjn \
	I0211 03:23:26.928033   74474 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2e161f5cde2e462cb9fb80847b9945297701bdc8e7251bde04f5738d45684f8b \
	I0211 03:23:26.928063   74474 kubeadm.go:310] 	--control-plane 
	I0211 03:23:26.928072   74474 kubeadm.go:310] 
	I0211 03:23:26.928177   74474 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0211 03:23:26.928198   74474 kubeadm.go:310] 
	I0211 03:23:26.928309   74474 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ebiq8w.v8x3puwqitndsbjn \
	I0211 03:23:26.928442   74474 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2e161f5cde2e462cb9fb80847b9945297701bdc8e7251bde04f5738d45684f8b 
	I0211 03:23:26.928455   74474 cni.go:84] Creating CNI manager for "flannel"
	I0211 03:23:26.929837   74474 out.go:177] * Configuring Flannel (Container Networking Interface) ...
	I0211 03:23:26.931264   74474 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0211 03:23:26.937067   74474 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.1/kubectl ...
	I0211 03:23:26.937088   74474 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4345 bytes)
	I0211 03:23:26.954563   74474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0211 03:23:27.384338   74474 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0211 03:23:27.384400   74474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 03:23:27.384471   74474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-649359 minikube.k8s.io/updated_at=2025_02_11T03_23_27_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=8e91f70b9b442caa4bec80b031add390ac34d321 minikube.k8s.io/name=flannel-649359 minikube.k8s.io/primary=true
	I0211 03:23:27.414230   74474 ops.go:34] apiserver oom_adj: -16
	I0211 03:23:27.541442   74474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 03:23:24.339475   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:24.340049   76224 main.go:141] libmachine: (bridge-649359) DBG | unable to find current IP address of domain bridge-649359 in network mk-bridge-649359
	I0211 03:23:24.340078   76224 main.go:141] libmachine: (bridge-649359) DBG | I0211 03:23:24.340007   76257 retry.go:31] will retry after 2.345457885s: waiting for domain to come up
	I0211 03:23:26.687811   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:26.688293   76224 main.go:141] libmachine: (bridge-649359) DBG | unable to find current IP address of domain bridge-649359 in network mk-bridge-649359
	I0211 03:23:26.688321   76224 main.go:141] libmachine: (bridge-649359) DBG | I0211 03:23:26.688254   76257 retry.go:31] will retry after 3.825044435s: waiting for domain to come up
	I0211 03:23:28.042372   74474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 03:23:28.541540   74474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 03:23:29.042336   74474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 03:23:29.541956   74474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 03:23:30.041506   74474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 03:23:30.541941   74474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 03:23:31.042153   74474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 03:23:31.157212   74474 kubeadm.go:1113] duration metric: took 3.772846263s to wait for elevateKubeSystemPrivileges
	I0211 03:23:31.157259   74474 kubeadm.go:394] duration metric: took 14.214120371s to StartCluster
	I0211 03:23:31.157284   74474 settings.go:142] acquiring lock: {Name:mkf2645a714cc5873c434b18e1494d4128c48052 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 03:23:31.157377   74474 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20400-12456/kubeconfig
	I0211 03:23:31.158947   74474 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12456/kubeconfig: {Name:mkd961d61f748b29ba3bb0ad55f8216d88f98444 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 03:23:31.159197   74474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0211 03:23:31.159194   74474 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.72.59 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0211 03:23:31.159278   74474 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0211 03:23:31.159400   74474 config.go:182] Loaded profile config "flannel-649359": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0211 03:23:31.159417   74474 addons.go:69] Setting default-storageclass=true in profile "flannel-649359"
	I0211 03:23:31.159405   74474 addons.go:69] Setting storage-provisioner=true in profile "flannel-649359"
	I0211 03:23:31.159470   74474 addons.go:238] Setting addon storage-provisioner=true in "flannel-649359"
	I0211 03:23:31.159436   74474 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-649359"
	I0211 03:23:31.159522   74474 host.go:66] Checking if "flannel-649359" exists ...
	I0211 03:23:31.159990   74474 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 03:23:31.160004   74474 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 03:23:31.160033   74474 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 03:23:31.160131   74474 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 03:23:31.160894   74474 out.go:177] * Verifying Kubernetes components...
	I0211 03:23:31.162270   74474 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0211 03:23:31.175352   74474 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36901
	I0211 03:23:31.175810   74474 main.go:141] libmachine: () Calling .GetVersion
	I0211 03:23:31.176282   74474 main.go:141] libmachine: Using API Version  1
	I0211 03:23:31.176308   74474 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 03:23:31.176589   74474 main.go:141] libmachine: () Calling .GetMachineName
	I0211 03:23:31.177193   74474 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 03:23:31.177240   74474 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 03:23:31.179744   74474 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39345
	I0211 03:23:31.180108   74474 main.go:141] libmachine: () Calling .GetVersion
	I0211 03:23:31.180621   74474 main.go:141] libmachine: Using API Version  1
	I0211 03:23:31.180637   74474 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 03:23:31.180943   74474 main.go:141] libmachine: () Calling .GetMachineName
	I0211 03:23:31.181139   74474 main.go:141] libmachine: (flannel-649359) Calling .GetState
	I0211 03:23:31.184387   74474 addons.go:238] Setting addon default-storageclass=true in "flannel-649359"
	I0211 03:23:31.184427   74474 host.go:66] Checking if "flannel-649359" exists ...
	I0211 03:23:31.184774   74474 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 03:23:31.184816   74474 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 03:23:31.192118   74474 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36603
	I0211 03:23:31.192590   74474 main.go:141] libmachine: () Calling .GetVersion
	I0211 03:23:31.193035   74474 main.go:141] libmachine: Using API Version  1
	I0211 03:23:31.193055   74474 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 03:23:31.193470   74474 main.go:141] libmachine: () Calling .GetMachineName
	I0211 03:23:31.193620   74474 main.go:141] libmachine: (flannel-649359) Calling .GetState
	I0211 03:23:31.195383   74474 main.go:141] libmachine: (flannel-649359) Calling .DriverName
	I0211 03:23:31.196999   74474 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0211 03:23:31.198355   74474 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0211 03:23:31.198376   74474 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0211 03:23:31.198393   74474 main.go:141] libmachine: (flannel-649359) Calling .GetSSHHostname
	I0211 03:23:31.202178   74474 main.go:141] libmachine: (flannel-649359) DBG | domain flannel-649359 has defined MAC address 52:54:00:7f:c4:fc in network mk-flannel-649359
	I0211 03:23:31.202505   74474 main.go:141] libmachine: (flannel-649359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:c4:fc", ip: ""} in network mk-flannel-649359: {Iface:virbr4 ExpiryTime:2025-02-11 04:22:58 +0000 UTC Type:0 Mac:52:54:00:7f:c4:fc Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:flannel-649359 Clientid:01:52:54:00:7f:c4:fc}
	I0211 03:23:31.202527   74474 main.go:141] libmachine: (flannel-649359) DBG | domain flannel-649359 has defined IP address 192.168.72.59 and MAC address 52:54:00:7f:c4:fc in network mk-flannel-649359
	I0211 03:23:31.202806   74474 main.go:141] libmachine: (flannel-649359) Calling .GetSSHPort
	I0211 03:23:31.202984   74474 main.go:141] libmachine: (flannel-649359) Calling .GetSSHKeyPath
	I0211 03:23:31.203127   74474 main.go:141] libmachine: (flannel-649359) Calling .GetSSHUsername
	I0211 03:23:31.203233   74474 sshutil.go:53] new ssh client: &{IP:192.168.72.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/flannel-649359/id_rsa Username:docker}
	I0211 03:23:31.207667   74474 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46317
	I0211 03:23:31.208063   74474 main.go:141] libmachine: () Calling .GetVersion
	I0211 03:23:31.208524   74474 main.go:141] libmachine: Using API Version  1
	I0211 03:23:31.208541   74474 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 03:23:31.208888   74474 main.go:141] libmachine: () Calling .GetMachineName
	I0211 03:23:31.209357   74474 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 03:23:31.209394   74474 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 03:23:31.225708   74474 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41747
	I0211 03:23:31.226181   74474 main.go:141] libmachine: () Calling .GetVersion
	I0211 03:23:31.226804   74474 main.go:141] libmachine: Using API Version  1
	I0211 03:23:31.226832   74474 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 03:23:31.227174   74474 main.go:141] libmachine: () Calling .GetMachineName
	I0211 03:23:31.227498   74474 main.go:141] libmachine: (flannel-649359) Calling .GetState
	I0211 03:23:31.229070   74474 main.go:141] libmachine: (flannel-649359) Calling .DriverName
	I0211 03:23:31.229290   74474 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0211 03:23:31.229308   74474 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0211 03:23:31.229327   74474 main.go:141] libmachine: (flannel-649359) Calling .GetSSHHostname
	I0211 03:23:31.232390   74474 main.go:141] libmachine: (flannel-649359) DBG | domain flannel-649359 has defined MAC address 52:54:00:7f:c4:fc in network mk-flannel-649359
	I0211 03:23:31.232914   74474 main.go:141] libmachine: (flannel-649359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:c4:fc", ip: ""} in network mk-flannel-649359: {Iface:virbr4 ExpiryTime:2025-02-11 04:22:58 +0000 UTC Type:0 Mac:52:54:00:7f:c4:fc Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:flannel-649359 Clientid:01:52:54:00:7f:c4:fc}
	I0211 03:23:31.232938   74474 main.go:141] libmachine: (flannel-649359) DBG | domain flannel-649359 has defined IP address 192.168.72.59 and MAC address 52:54:00:7f:c4:fc in network mk-flannel-649359
	I0211 03:23:31.233011   74474 main.go:141] libmachine: (flannel-649359) Calling .GetSSHPort
	I0211 03:23:31.233171   74474 main.go:141] libmachine: (flannel-649359) Calling .GetSSHKeyPath
	I0211 03:23:31.233295   74474 main.go:141] libmachine: (flannel-649359) Calling .GetSSHUsername
	I0211 03:23:31.233530   74474 sshutil.go:53] new ssh client: &{IP:192.168.72.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/flannel-649359/id_rsa Username:docker}
	I0211 03:23:31.512598   74474 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0211 03:23:31.512814   74474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0211 03:23:31.535457   74474 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0211 03:23:31.718768   74474 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0211 03:23:31.971800   74474 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0211 03:23:31.972666   74474 node_ready.go:35] waiting up to 15m0s for node "flannel-649359" to be "Ready" ...
	I0211 03:23:32.239075   74474 main.go:141] libmachine: Making call to close driver server
	I0211 03:23:32.239107   74474 main.go:141] libmachine: (flannel-649359) Calling .Close
	I0211 03:23:32.239165   74474 main.go:141] libmachine: Making call to close driver server
	I0211 03:23:32.239214   74474 main.go:141] libmachine: (flannel-649359) Calling .Close
	I0211 03:23:32.239387   74474 main.go:141] libmachine: Successfully made call to close driver server
	I0211 03:23:32.239408   74474 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 03:23:32.239417   74474 main.go:141] libmachine: Making call to close driver server
	I0211 03:23:32.239425   74474 main.go:141] libmachine: (flannel-649359) Calling .Close
	I0211 03:23:32.239485   74474 main.go:141] libmachine: Successfully made call to close driver server
	I0211 03:23:32.239507   74474 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 03:23:32.239523   74474 main.go:141] libmachine: Making call to close driver server
	I0211 03:23:32.239527   74474 main.go:141] libmachine: (flannel-649359) DBG | Closing plugin on server side
	I0211 03:23:32.239531   74474 main.go:141] libmachine: (flannel-649359) Calling .Close
	I0211 03:23:32.239604   74474 main.go:141] libmachine: (flannel-649359) DBG | Closing plugin on server side
	I0211 03:23:32.239630   74474 main.go:141] libmachine: Successfully made call to close driver server
	I0211 03:23:32.239641   74474 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 03:23:32.239919   74474 main.go:141] libmachine: (flannel-649359) DBG | Closing plugin on server side
	I0211 03:23:32.239936   74474 main.go:141] libmachine: Successfully made call to close driver server
	I0211 03:23:32.239948   74474 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 03:23:32.239966   74474 main.go:141] libmachine: (flannel-649359) DBG | Closing plugin on server side
	I0211 03:23:32.251488   74474 main.go:141] libmachine: Making call to close driver server
	I0211 03:23:32.251503   74474 main.go:141] libmachine: (flannel-649359) Calling .Close
	I0211 03:23:32.251785   74474 main.go:141] libmachine: Successfully made call to close driver server
	I0211 03:23:32.251803   74474 main.go:141] libmachine: (flannel-649359) DBG | Closing plugin on server side
	I0211 03:23:32.251805   74474 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 03:23:32.253228   74474 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0211 03:23:32.254447   74474 addons.go:514] duration metric: took 1.095190594s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0211 03:23:32.475799   74474 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-649359" context rescaled to 1 replicas
	I0211 03:23:30.516131   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:30.516631   76224 main.go:141] libmachine: (bridge-649359) DBG | unable to find current IP address of domain bridge-649359 in network mk-bridge-649359
	I0211 03:23:30.516703   76224 main.go:141] libmachine: (bridge-649359) DBG | I0211 03:23:30.516626   76257 retry.go:31] will retry after 4.666819437s: waiting for domain to come up
	I0211 03:23:33.976110   74474 node_ready.go:53] node "flannel-649359" has status "Ready":"False"
	I0211 03:23:36.477242   74474 node_ready.go:53] node "flannel-649359" has status "Ready":"False"
	I0211 03:23:35.186578   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:35.187144   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has current primary IP address 192.168.61.91 and MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:35.187203   76224 main.go:141] libmachine: (bridge-649359) found domain IP: 192.168.61.91
	I0211 03:23:35.187227   76224 main.go:141] libmachine: (bridge-649359) reserving static IP address...
	I0211 03:23:35.187589   76224 main.go:141] libmachine: (bridge-649359) DBG | unable to find host DHCP lease matching {name: "bridge-649359", mac: "52:54:00:2f:d7:2b", ip: "192.168.61.91"} in network mk-bridge-649359
	I0211 03:23:35.267976   76224 main.go:141] libmachine: (bridge-649359) DBG | Getting to WaitForSSH function...
	I0211 03:23:35.268008   76224 main.go:141] libmachine: (bridge-649359) reserved static IP address 192.168.61.91 for domain bridge-649359
	I0211 03:23:35.268020   76224 main.go:141] libmachine: (bridge-649359) waiting for SSH...
	I0211 03:23:35.270460   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:35.270885   76224 main.go:141] libmachine: (bridge-649359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:d7:2b", ip: ""} in network mk-bridge-649359: {Iface:virbr3 ExpiryTime:2025-02-11 04:23:29 +0000 UTC Type:0 Mac:52:54:00:2f:d7:2b Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2f:d7:2b}
	I0211 03:23:35.270915   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined IP address 192.168.61.91 and MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:35.270997   76224 main.go:141] libmachine: (bridge-649359) DBG | Using SSH client type: external
	I0211 03:23:35.271023   76224 main.go:141] libmachine: (bridge-649359) DBG | Using SSH private key: /home/jenkins/minikube-integration/20400-12456/.minikube/machines/bridge-649359/id_rsa (-rw-------)
	I0211 03:23:35.271055   76224 main.go:141] libmachine: (bridge-649359) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.91 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20400-12456/.minikube/machines/bridge-649359/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0211 03:23:35.271074   76224 main.go:141] libmachine: (bridge-649359) DBG | About to run SSH command:
	I0211 03:23:35.271085   76224 main.go:141] libmachine: (bridge-649359) DBG | exit 0
	I0211 03:23:35.402666   76224 main.go:141] libmachine: (bridge-649359) DBG | SSH cmd err, output: <nil>: 
	I0211 03:23:35.402946   76224 main.go:141] libmachine: (bridge-649359) KVM machine creation complete
	I0211 03:23:35.403256   76224 main.go:141] libmachine: (bridge-649359) Calling .GetConfigRaw
	I0211 03:23:35.403871   76224 main.go:141] libmachine: (bridge-649359) Calling .DriverName
	I0211 03:23:35.404070   76224 main.go:141] libmachine: (bridge-649359) Calling .DriverName
	I0211 03:23:35.404224   76224 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0211 03:23:35.404243   76224 main.go:141] libmachine: (bridge-649359) Calling .GetState
	I0211 03:23:35.405533   76224 main.go:141] libmachine: Detecting operating system of created instance...
	I0211 03:23:35.405551   76224 main.go:141] libmachine: Waiting for SSH to be available...
	I0211 03:23:35.405559   76224 main.go:141] libmachine: Getting to WaitForSSH function...
	I0211 03:23:35.405621   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHHostname
	I0211 03:23:35.408617   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:35.409051   76224 main.go:141] libmachine: (bridge-649359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:d7:2b", ip: ""} in network mk-bridge-649359: {Iface:virbr3 ExpiryTime:2025-02-11 04:23:29 +0000 UTC Type:0 Mac:52:54:00:2f:d7:2b Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:bridge-649359 Clientid:01:52:54:00:2f:d7:2b}
	I0211 03:23:35.409125   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined IP address 192.168.61.91 and MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:35.409326   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHPort
	I0211 03:23:35.409495   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHKeyPath
	I0211 03:23:35.409695   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHKeyPath
	I0211 03:23:35.409843   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHUsername
	I0211 03:23:35.410020   76224 main.go:141] libmachine: Using SSH client type: native
	I0211 03:23:35.410255   76224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0211 03:23:35.410284   76224 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0211 03:23:35.526114   76224 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0211 03:23:35.526137   76224 main.go:141] libmachine: Detecting the provisioner...
	I0211 03:23:35.526147   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHHostname
	I0211 03:23:35.529034   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:35.529374   76224 main.go:141] libmachine: (bridge-649359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:d7:2b", ip: ""} in network mk-bridge-649359: {Iface:virbr3 ExpiryTime:2025-02-11 04:23:29 +0000 UTC Type:0 Mac:52:54:00:2f:d7:2b Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:bridge-649359 Clientid:01:52:54:00:2f:d7:2b}
	I0211 03:23:35.529408   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined IP address 192.168.61.91 and MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:35.529677   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHPort
	I0211 03:23:35.529858   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHKeyPath
	I0211 03:23:35.529998   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHKeyPath
	I0211 03:23:35.530142   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHUsername
	I0211 03:23:35.530313   76224 main.go:141] libmachine: Using SSH client type: native
	I0211 03:23:35.530522   76224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0211 03:23:35.530536   76224 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0211 03:23:35.635554   76224 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0211 03:23:35.635643   76224 main.go:141] libmachine: found compatible host: buildroot
	I0211 03:23:35.635658   76224 main.go:141] libmachine: Provisioning with buildroot...
	I0211 03:23:35.635673   76224 main.go:141] libmachine: (bridge-649359) Calling .GetMachineName
	I0211 03:23:35.635899   76224 buildroot.go:166] provisioning hostname "bridge-649359"
	I0211 03:23:35.635933   76224 main.go:141] libmachine: (bridge-649359) Calling .GetMachineName
	I0211 03:23:35.636138   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHHostname
	I0211 03:23:35.638946   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:35.639414   76224 main.go:141] libmachine: (bridge-649359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:d7:2b", ip: ""} in network mk-bridge-649359: {Iface:virbr3 ExpiryTime:2025-02-11 04:23:29 +0000 UTC Type:0 Mac:52:54:00:2f:d7:2b Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:bridge-649359 Clientid:01:52:54:00:2f:d7:2b}
	I0211 03:23:35.639443   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined IP address 192.168.61.91 and MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:35.639607   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHPort
	I0211 03:23:35.639805   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHKeyPath
	I0211 03:23:35.639949   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHKeyPath
	I0211 03:23:35.640086   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHUsername
	I0211 03:23:35.640237   76224 main.go:141] libmachine: Using SSH client type: native
	I0211 03:23:35.640437   76224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0211 03:23:35.640452   76224 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-649359 && echo "bridge-649359" | sudo tee /etc/hostname
	I0211 03:23:35.762221   76224 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-649359
	
	I0211 03:23:35.762265   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHHostname
	I0211 03:23:35.765124   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:35.765565   76224 main.go:141] libmachine: (bridge-649359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:d7:2b", ip: ""} in network mk-bridge-649359: {Iface:virbr3 ExpiryTime:2025-02-11 04:23:29 +0000 UTC Type:0 Mac:52:54:00:2f:d7:2b Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:bridge-649359 Clientid:01:52:54:00:2f:d7:2b}
	I0211 03:23:35.765592   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined IP address 192.168.61.91 and MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:35.765798   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHPort
	I0211 03:23:35.765983   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHKeyPath
	I0211 03:23:35.766134   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHKeyPath
	I0211 03:23:35.766295   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHUsername
	I0211 03:23:35.766454   76224 main.go:141] libmachine: Using SSH client type: native
	I0211 03:23:35.766667   76224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0211 03:23:35.766700   76224 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-649359' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-649359/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-649359' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0211 03:23:35.892813   76224 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0211 03:23:35.892848   76224 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20400-12456/.minikube CaCertPath:/home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20400-12456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20400-12456/.minikube}
	I0211 03:23:35.892882   76224 buildroot.go:174] setting up certificates
	I0211 03:23:35.892898   76224 provision.go:84] configureAuth start
	I0211 03:23:35.892911   76224 main.go:141] libmachine: (bridge-649359) Calling .GetMachineName
	I0211 03:23:35.893179   76224 main.go:141] libmachine: (bridge-649359) Calling .GetIP
	I0211 03:23:35.896644   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:35.896941   76224 main.go:141] libmachine: (bridge-649359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:d7:2b", ip: ""} in network mk-bridge-649359: {Iface:virbr3 ExpiryTime:2025-02-11 04:23:29 +0000 UTC Type:0 Mac:52:54:00:2f:d7:2b Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:bridge-649359 Clientid:01:52:54:00:2f:d7:2b}
	I0211 03:23:35.896984   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined IP address 192.168.61.91 and MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:35.897119   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHHostname
	I0211 03:23:35.899782   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:35.900151   76224 main.go:141] libmachine: (bridge-649359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:d7:2b", ip: ""} in network mk-bridge-649359: {Iface:virbr3 ExpiryTime:2025-02-11 04:23:29 +0000 UTC Type:0 Mac:52:54:00:2f:d7:2b Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:bridge-649359 Clientid:01:52:54:00:2f:d7:2b}
	I0211 03:23:35.900194   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined IP address 192.168.61.91 and MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:35.900321   76224 provision.go:143] copyHostCerts
	I0211 03:23:35.900388   76224 exec_runner.go:144] found /home/jenkins/minikube-integration/20400-12456/.minikube/ca.pem, removing ...
	I0211 03:23:35.900412   76224 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20400-12456/.minikube/ca.pem
	I0211 03:23:35.900497   76224 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20400-12456/.minikube/ca.pem (1078 bytes)
	I0211 03:23:35.900624   76224 exec_runner.go:144] found /home/jenkins/minikube-integration/20400-12456/.minikube/cert.pem, removing ...
	I0211 03:23:35.900631   76224 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20400-12456/.minikube/cert.pem
	I0211 03:23:35.900661   76224 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20400-12456/.minikube/cert.pem (1123 bytes)
	I0211 03:23:35.900745   76224 exec_runner.go:144] found /home/jenkins/minikube-integration/20400-12456/.minikube/key.pem, removing ...
	I0211 03:23:35.900752   76224 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20400-12456/.minikube/key.pem
	I0211 03:23:35.900780   76224 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20400-12456/.minikube/key.pem (1679 bytes)
	I0211 03:23:35.900854   76224 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20400-12456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca-key.pem org=jenkins.bridge-649359 san=[127.0.0.1 192.168.61.91 bridge-649359 localhost minikube]
	I0211 03:23:36.073804   76224 provision.go:177] copyRemoteCerts
	I0211 03:23:36.073857   76224 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0211 03:23:36.073890   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHHostname
	I0211 03:23:36.077003   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:36.077481   76224 main.go:141] libmachine: (bridge-649359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:d7:2b", ip: ""} in network mk-bridge-649359: {Iface:virbr3 ExpiryTime:2025-02-11 04:23:29 +0000 UTC Type:0 Mac:52:54:00:2f:d7:2b Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:bridge-649359 Clientid:01:52:54:00:2f:d7:2b}
	I0211 03:23:36.077514   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined IP address 192.168.61.91 and MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:36.077769   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHPort
	I0211 03:23:36.077984   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHKeyPath
	I0211 03:23:36.078141   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHUsername
	I0211 03:23:36.078290   76224 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/bridge-649359/id_rsa Username:docker}
	I0211 03:23:36.161453   76224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0211 03:23:36.189242   76224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0211 03:23:36.216597   76224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0211 03:23:36.243510   76224 provision.go:87] duration metric: took 350.596014ms to configureAuth
	I0211 03:23:36.243541   76224 buildroot.go:189] setting minikube options for container-runtime
	I0211 03:23:36.243781   76224 config.go:182] Loaded profile config "bridge-649359": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0211 03:23:36.243871   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHHostname
	I0211 03:23:36.247213   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:36.247674   76224 main.go:141] libmachine: (bridge-649359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:d7:2b", ip: ""} in network mk-bridge-649359: {Iface:virbr3 ExpiryTime:2025-02-11 04:23:29 +0000 UTC Type:0 Mac:52:54:00:2f:d7:2b Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:bridge-649359 Clientid:01:52:54:00:2f:d7:2b}
	I0211 03:23:36.247702   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined IP address 192.168.61.91 and MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:36.247936   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHPort
	I0211 03:23:36.248124   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHKeyPath
	I0211 03:23:36.248314   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHKeyPath
	I0211 03:23:36.248459   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHUsername
	I0211 03:23:36.248635   76224 main.go:141] libmachine: Using SSH client type: native
	I0211 03:23:36.248853   76224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0211 03:23:36.248875   76224 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0211 03:23:36.517171   76224 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0211 03:23:36.517204   76224 main.go:141] libmachine: Checking connection to Docker...
	I0211 03:23:36.517215   76224 main.go:141] libmachine: (bridge-649359) Calling .GetURL
	I0211 03:23:36.520153   76224 main.go:141] libmachine: (bridge-649359) DBG | using libvirt version 6000000
	I0211 03:23:36.523261   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:36.523631   76224 main.go:141] libmachine: (bridge-649359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:d7:2b", ip: ""} in network mk-bridge-649359: {Iface:virbr3 ExpiryTime:2025-02-11 04:23:29 +0000 UTC Type:0 Mac:52:54:00:2f:d7:2b Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:bridge-649359 Clientid:01:52:54:00:2f:d7:2b}
	I0211 03:23:36.523666   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined IP address 192.168.61.91 and MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:36.523827   76224 main.go:141] libmachine: Docker is up and running!
	I0211 03:23:36.523859   76224 main.go:141] libmachine: Reticulating splines...
	I0211 03:23:36.523871   76224 client.go:171] duration metric: took 23.332575353s to LocalClient.Create
	I0211 03:23:36.523900   76224 start.go:167] duration metric: took 23.332644457s to libmachine.API.Create "bridge-649359"
	I0211 03:23:36.523913   76224 start.go:293] postStartSetup for "bridge-649359" (driver="kvm2")
	I0211 03:23:36.523929   76224 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0211 03:23:36.523954   76224 main.go:141] libmachine: (bridge-649359) Calling .DriverName
	I0211 03:23:36.524189   76224 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0211 03:23:36.524219   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHHostname
	I0211 03:23:36.526942   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:36.527288   76224 main.go:141] libmachine: (bridge-649359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:d7:2b", ip: ""} in network mk-bridge-649359: {Iface:virbr3 ExpiryTime:2025-02-11 04:23:29 +0000 UTC Type:0 Mac:52:54:00:2f:d7:2b Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:bridge-649359 Clientid:01:52:54:00:2f:d7:2b}
	I0211 03:23:36.527312   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined IP address 192.168.61.91 and MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:36.527456   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHPort
	I0211 03:23:36.527617   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHKeyPath
	I0211 03:23:36.527779   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHUsername
	I0211 03:23:36.527903   76224 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/bridge-649359/id_rsa Username:docker}
	I0211 03:23:36.617753   76224 ssh_runner.go:195] Run: cat /etc/os-release
	I0211 03:23:36.622972   76224 info.go:137] Remote host: Buildroot 2023.02.9
	I0211 03:23:36.623024   76224 filesync.go:126] Scanning /home/jenkins/minikube-integration/20400-12456/.minikube/addons for local assets ...
	I0211 03:23:36.623081   76224 filesync.go:126] Scanning /home/jenkins/minikube-integration/20400-12456/.minikube/files for local assets ...
	I0211 03:23:36.623178   76224 filesync.go:149] local asset: /home/jenkins/minikube-integration/20400-12456/.minikube/files/etc/ssl/certs/196452.pem -> 196452.pem in /etc/ssl/certs
	I0211 03:23:36.623319   76224 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0211 03:23:36.636859   76224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/files/etc/ssl/certs/196452.pem --> /etc/ssl/certs/196452.pem (1708 bytes)
	I0211 03:23:36.666083   76224 start.go:296] duration metric: took 142.139243ms for postStartSetup
	I0211 03:23:36.666143   76224 main.go:141] libmachine: (bridge-649359) Calling .GetConfigRaw
	I0211 03:23:36.666765   76224 main.go:141] libmachine: (bridge-649359) Calling .GetIP
	I0211 03:23:36.670223   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:36.670654   76224 main.go:141] libmachine: (bridge-649359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:d7:2b", ip: ""} in network mk-bridge-649359: {Iface:virbr3 ExpiryTime:2025-02-11 04:23:29 +0000 UTC Type:0 Mac:52:54:00:2f:d7:2b Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:bridge-649359 Clientid:01:52:54:00:2f:d7:2b}
	I0211 03:23:36.670685   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined IP address 192.168.61.91 and MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:36.671009   76224 profile.go:143] Saving config to /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/config.json ...
	I0211 03:23:36.671232   76224 start.go:128] duration metric: took 23.511030087s to createHost
	I0211 03:23:36.671263   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHHostname
	I0211 03:23:36.673942   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:36.674427   76224 main.go:141] libmachine: (bridge-649359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:d7:2b", ip: ""} in network mk-bridge-649359: {Iface:virbr3 ExpiryTime:2025-02-11 04:23:29 +0000 UTC Type:0 Mac:52:54:00:2f:d7:2b Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:bridge-649359 Clientid:01:52:54:00:2f:d7:2b}
	I0211 03:23:36.674449   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined IP address 192.168.61.91 and MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:36.674646   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHPort
	I0211 03:23:36.674823   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHKeyPath
	I0211 03:23:36.675003   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHKeyPath
	I0211 03:23:36.675148   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHUsername
	I0211 03:23:36.675332   76224 main.go:141] libmachine: Using SSH client type: native
	I0211 03:23:36.675536   76224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0211 03:23:36.675547   76224 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0211 03:23:36.784610   76224 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739244216.756269292
	
	I0211 03:23:36.784633   76224 fix.go:216] guest clock: 1739244216.756269292
	I0211 03:23:36.784643   76224 fix.go:229] Guest: 2025-02-11 03:23:36.756269292 +0000 UTC Remote: 2025-02-11 03:23:36.671247216 +0000 UTC m=+23.630270874 (delta=85.022076ms)
	I0211 03:23:36.784669   76224 fix.go:200] guest clock delta is within tolerance: 85.022076ms
	I0211 03:23:36.784676   76224 start.go:83] releasing machines lock for "bridge-649359", held for 23.624569744s
	I0211 03:23:36.784698   76224 main.go:141] libmachine: (bridge-649359) Calling .DriverName
	I0211 03:23:36.784951   76224 main.go:141] libmachine: (bridge-649359) Calling .GetIP
	I0211 03:23:36.788080   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:36.788483   76224 main.go:141] libmachine: (bridge-649359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:d7:2b", ip: ""} in network mk-bridge-649359: {Iface:virbr3 ExpiryTime:2025-02-11 04:23:29 +0000 UTC Type:0 Mac:52:54:00:2f:d7:2b Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:bridge-649359 Clientid:01:52:54:00:2f:d7:2b}
	I0211 03:23:36.788511   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined IP address 192.168.61.91 and MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:36.788784   76224 main.go:141] libmachine: (bridge-649359) Calling .DriverName
	I0211 03:23:36.789525   76224 main.go:141] libmachine: (bridge-649359) Calling .DriverName
	I0211 03:23:36.789693   76224 main.go:141] libmachine: (bridge-649359) Calling .DriverName
	I0211 03:23:36.789782   76224 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0211 03:23:36.789829   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHHostname
	I0211 03:23:36.789909   76224 ssh_runner.go:195] Run: cat /version.json
	I0211 03:23:36.789924   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHHostname
	I0211 03:23:36.792931   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:36.793131   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:36.793261   76224 main.go:141] libmachine: (bridge-649359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:d7:2b", ip: ""} in network mk-bridge-649359: {Iface:virbr3 ExpiryTime:2025-02-11 04:23:29 +0000 UTC Type:0 Mac:52:54:00:2f:d7:2b Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:bridge-649359 Clientid:01:52:54:00:2f:d7:2b}
	I0211 03:23:36.793288   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined IP address 192.168.61.91 and MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:36.793553   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHPort
	I0211 03:23:36.793622   76224 main.go:141] libmachine: (bridge-649359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:d7:2b", ip: ""} in network mk-bridge-649359: {Iface:virbr3 ExpiryTime:2025-02-11 04:23:29 +0000 UTC Type:0 Mac:52:54:00:2f:d7:2b Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:bridge-649359 Clientid:01:52:54:00:2f:d7:2b}
	I0211 03:23:36.793640   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined IP address 192.168.61.91 and MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:36.793712   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHKeyPath
	I0211 03:23:36.793762   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHPort
	I0211 03:23:36.793864   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHKeyPath
	I0211 03:23:36.793870   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHUsername
	I0211 03:23:36.793981   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHUsername
	I0211 03:23:36.794028   76224 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/bridge-649359/id_rsa Username:docker}
	I0211 03:23:36.794310   76224 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/bridge-649359/id_rsa Username:docker}
	I0211 03:23:36.885484   76224 ssh_runner.go:195] Run: systemctl --version
	I0211 03:23:36.911040   76224 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0211 03:23:37.073802   76224 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0211 03:23:37.079969   76224 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0211 03:23:37.080026   76224 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0211 03:23:37.096890   76224 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0211 03:23:37.096914   76224 start.go:495] detecting cgroup driver to use...
	I0211 03:23:37.096978   76224 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0211 03:23:37.120026   76224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0211 03:23:37.139174   76224 docker.go:217] disabling cri-docker service (if available) ...
	I0211 03:23:37.139247   76224 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0211 03:23:37.153365   76224 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0211 03:23:37.169285   76224 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0211 03:23:37.306563   76224 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0211 03:23:37.460219   76224 docker.go:233] disabling docker service ...
	I0211 03:23:37.460288   76224 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0211 03:23:37.479218   76224 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0211 03:23:37.493170   76224 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0211 03:23:37.692462   76224 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0211 03:23:37.842158   76224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0211 03:23:37.856153   76224 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0211 03:23:37.873979   76224 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0211 03:23:37.874046   76224 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 03:23:37.887261   76224 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0211 03:23:37.887332   76224 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 03:23:37.899856   76224 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 03:23:37.910779   76224 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 03:23:37.921712   76224 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0211 03:23:37.933184   76224 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 03:23:37.943491   76224 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 03:23:37.962546   76224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 03:23:37.973182   76224 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0211 03:23:37.985022   76224 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0211 03:23:37.985087   76224 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0211 03:23:38.001415   76224 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0211 03:23:38.013337   76224 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0211 03:23:38.159717   76224 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0211 03:23:38.264071   76224 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0211 03:23:38.264138   76224 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0211 03:23:38.269045   76224 start.go:563] Will wait 60s for crictl version
	I0211 03:23:38.269103   76224 ssh_runner.go:195] Run: which crictl
	I0211 03:23:38.273062   76224 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0211 03:23:38.325620   76224 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0211 03:23:38.325708   76224 ssh_runner.go:195] Run: crio --version
	I0211 03:23:38.364258   76224 ssh_runner.go:195] Run: crio --version
	I0211 03:23:38.407529   76224 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0211 03:23:38.977889   74474 node_ready.go:49] node "flannel-649359" has status "Ready":"True"
	I0211 03:23:38.977915   74474 node_ready.go:38] duration metric: took 7.005228002s for node "flannel-649359" to be "Ready" ...
	I0211 03:23:38.977927   74474 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0211 03:23:38.983312   74474 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-ktrqg" in "kube-system" namespace to be "Ready" ...
	I0211 03:23:40.991441   74474 pod_ready.go:103] pod "coredns-668d6bf9bc-ktrqg" in "kube-system" namespace has status "Ready":"False"
	I0211 03:23:38.408768   76224 main.go:141] libmachine: (bridge-649359) Calling .GetIP
	I0211 03:23:38.411980   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:38.412508   76224 main.go:141] libmachine: (bridge-649359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:d7:2b", ip: ""} in network mk-bridge-649359: {Iface:virbr3 ExpiryTime:2025-02-11 04:23:29 +0000 UTC Type:0 Mac:52:54:00:2f:d7:2b Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:bridge-649359 Clientid:01:52:54:00:2f:d7:2b}
	I0211 03:23:38.412536   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined IP address 192.168.61.91 and MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:38.412760   76224 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0211 03:23:38.417850   76224 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0211 03:23:38.432430   76224 kubeadm.go:883] updating cluster {Name:bridge-649359 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-649359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0211 03:23:38.432565   76224 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0211 03:23:38.432625   76224 ssh_runner.go:195] Run: sudo crictl images --output json
	I0211 03:23:38.467470   76224 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0211 03:23:38.467540   76224 ssh_runner.go:195] Run: which lz4
	I0211 03:23:38.472291   76224 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0211 03:23:38.477628   76224 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0211 03:23:38.477655   76224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0211 03:23:39.812140   76224 crio.go:462] duration metric: took 1.339890711s to copy over tarball
	I0211 03:23:39.812220   76224 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0211 03:23:42.174273   76224 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.362027518s)
	I0211 03:23:42.174298   76224 crio.go:469] duration metric: took 2.362130701s to extract the tarball
	I0211 03:23:42.174308   76224 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0211 03:23:42.212441   76224 ssh_runner.go:195] Run: sudo crictl images --output json
	I0211 03:23:42.259137   76224 crio.go:514] all images are preloaded for cri-o runtime.
	I0211 03:23:42.259167   76224 cache_images.go:84] Images are preloaded, skipping loading
	I0211 03:23:42.259183   76224 kubeadm.go:934] updating node { 192.168.61.91 8443 v1.32.1 crio true true} ...
	I0211 03:23:42.259323   76224 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-649359 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.91
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:bridge-649359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0211 03:23:42.259405   76224 ssh_runner.go:195] Run: crio config
	I0211 03:23:42.310270   76224 cni.go:84] Creating CNI manager for "bridge"
	I0211 03:23:42.310296   76224 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0211 03:23:42.310319   76224 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.91 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-649359 NodeName:bridge-649359 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.91"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.91 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0211 03:23:42.310444   76224 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.91
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-649359"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.91"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.91"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0211 03:23:42.310509   76224 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0211 03:23:42.321508   76224 binaries.go:44] Found k8s binaries, skipping transfer
	I0211 03:23:42.321574   76224 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0211 03:23:42.331133   76224 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0211 03:23:42.348183   76224 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0211 03:23:42.363793   76224 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
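The 2290-byte kubeadm.yaml.new copied here is the multi-document stream dumped above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal Go sketch, separate from minikube's own code and assuming gopkg.in/yaml.v3 is available, that decodes such a stream and lists each document's kind:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Path taken from the scp step in this log; adjust as needed.
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		err := dec.Decode(&doc)
		if err == io.EOF {
			break
		}
		if err != nil {
			panic(err)
		}
		// Expect: InitConfiguration, ClusterConfiguration,
		// KubeletConfiguration, KubeProxyConfiguration.
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}

Decoding document by document mirrors how kubeadm consumes the file: each "---"-separated section is parsed against its own API group and version.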
	I0211 03:23:42.380017   76224 ssh_runner.go:195] Run: grep 192.168.61.91	control-plane.minikube.internal$ /etc/hosts
	I0211 03:23:42.385032   76224 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.91	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0211 03:23:42.397563   76224 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0211 03:23:42.523999   76224 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0211 03:23:42.541483   76224 certs.go:68] Setting up /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359 for IP: 192.168.61.91
	I0211 03:23:42.541513   76224 certs.go:194] generating shared ca certs ...
	I0211 03:23:42.541537   76224 certs.go:226] acquiring lock for ca certs: {Name:mk14e70e4f3b98aff6eac535114852cc1d70eb3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 03:23:42.541716   76224 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20400-12456/.minikube/ca.key
	I0211 03:23:42.541775   76224 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20400-12456/.minikube/proxy-client-ca.key
	I0211 03:23:42.541788   76224 certs.go:256] generating profile certs ...
	I0211 03:23:42.541855   76224 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/client.key
	I0211 03:23:42.541872   76224 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/client.crt with IP's: []
	I0211 03:23:42.645334   76224 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/client.crt ...
	I0211 03:23:42.645359   76224 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/client.crt: {Name:mk0338e38361e05c75c2b3a994416e9f58924163 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 03:23:42.645552   76224 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/client.key ...
	I0211 03:23:42.645567   76224 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/client.key: {Name:mk6d12c32427bf53d242b075e652c1ff02636b6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 03:23:42.645670   76224 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/apiserver.key.aa660d69
	I0211 03:23:42.645686   76224 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/apiserver.crt.aa660d69 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.91]
	I0211 03:23:42.778443   76224 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/apiserver.crt.aa660d69 ...
	I0211 03:23:42.778468   76224 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/apiserver.crt.aa660d69: {Name:mkd0a56a0a00f7f1b41760f04afa85e6e0184dbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 03:23:42.778628   76224 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/apiserver.key.aa660d69 ...
	I0211 03:23:42.778645   76224 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/apiserver.key.aa660d69: {Name:mke07d566ab34107bd02ef3f5e64b95800771781 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 03:23:42.778742   76224 certs.go:381] copying /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/apiserver.crt.aa660d69 -> /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/apiserver.crt
	I0211 03:23:42.778827   76224 certs.go:385] copying /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/apiserver.key.aa660d69 -> /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/apiserver.key
	I0211 03:23:42.778920   76224 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/proxy-client.key
	I0211 03:23:42.778940   76224 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/proxy-client.crt with IP's: []
	I0211 03:23:42.875092   76224 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/proxy-client.crt ...
	I0211 03:23:42.875121   76224 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/proxy-client.crt: {Name:mkeabdd180390de46ff9b6cea91ea3abddccb352 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 03:23:42.875295   76224 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/proxy-client.key ...
	I0211 03:23:42.875311   76224 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/proxy-client.key: {Name:mkd0db8b768de67babf0cb224e84ca4a2da93731 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 03:23:42.875495   76224 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/19645.pem (1338 bytes)
	W0211 03:23:42.875544   76224 certs.go:480] ignoring /home/jenkins/minikube-integration/20400-12456/.minikube/certs/19645_empty.pem, impossibly tiny 0 bytes
	I0211 03:23:42.875560   76224 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca-key.pem (1675 bytes)
	I0211 03:23:42.875605   76224 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca.pem (1078 bytes)
	I0211 03:23:42.875637   76224 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/cert.pem (1123 bytes)
	I0211 03:23:42.875670   76224 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/key.pem (1679 bytes)
	I0211 03:23:42.875727   76224 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12456/.minikube/files/etc/ssl/certs/196452.pem (1708 bytes)
	I0211 03:23:42.876268   76224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0211 03:23:42.901656   76224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0211 03:23:42.927101   76224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0211 03:23:42.954623   76224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0211 03:23:42.980378   76224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0211 03:23:43.009888   76224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0211 03:23:43.034956   76224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0211 03:23:43.060866   76224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0211 03:23:43.084114   76224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0211 03:23:43.105495   76224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/certs/19645.pem --> /usr/share/ca-certificates/19645.pem (1338 bytes)
	I0211 03:23:43.131325   76224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/files/etc/ssl/certs/196452.pem --> /usr/share/ca-certificates/196452.pem (1708 bytes)
	I0211 03:23:43.159178   76224 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0211 03:23:43.176120   76224 ssh_runner.go:195] Run: openssl version
	I0211 03:23:43.181981   76224 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0211 03:23:43.192433   76224 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0211 03:23:43.196878   76224 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 11 02:02 /usr/share/ca-certificates/minikubeCA.pem
	I0211 03:23:43.196935   76224 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0211 03:23:43.202795   76224 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0211 03:23:43.212979   76224 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19645.pem && ln -fs /usr/share/ca-certificates/19645.pem /etc/ssl/certs/19645.pem"
	I0211 03:23:43.228900   76224 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19645.pem
	I0211 03:23:43.234639   76224 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 11 02:09 /usr/share/ca-certificates/19645.pem
	I0211 03:23:43.234695   76224 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19645.pem
	I0211 03:23:43.242209   76224 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/19645.pem /etc/ssl/certs/51391683.0"
	I0211 03:23:43.259478   76224 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/196452.pem && ln -fs /usr/share/ca-certificates/196452.pem /etc/ssl/certs/196452.pem"
	I0211 03:23:43.276539   76224 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/196452.pem
	I0211 03:23:43.282230   76224 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 11 02:09 /usr/share/ca-certificates/196452.pem
	I0211 03:23:43.282289   76224 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/196452.pem
	I0211 03:23:43.289640   76224 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/196452.pem /etc/ssl/certs/3ec20f2e.0"
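The test/ln pairs above follow OpenSSL's c_rehash convention: a CA bundle placed in /etc/ssl/certs is resolved through a symlink named <subject-hash>.0, where the hash is the value printed by openssl x509 -hash -noout. A standalone Go sketch (illustrative only, not minikube code) that reproduces the link name for the minikubeCA bundle, assuming the openssl binary is on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same certificate the log links above.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
		"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	// OpenSSL looks CAs up via <subject-hash>.0 symlinks in the certs dir,
	// which is why the log creates b5213941.0 for minikubeCA.pem.
	fmt.Printf("ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/%s.0\n", hash)
}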
	I0211 03:23:43.306157   76224 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0211 03:23:43.311168   76224 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0211 03:23:43.311222   76224 kubeadm.go:392] StartCluster: {Name:bridge-649359 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-649359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0211 03:23:43.311303   76224 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0211 03:23:43.311364   76224 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0211 03:23:43.347602   76224 cri.go:89] found id: ""
	I0211 03:23:43.347669   76224 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0211 03:23:43.356681   76224 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0211 03:23:43.365865   76224 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0211 03:23:43.377932   76224 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0211 03:23:43.377951   76224 kubeadm.go:157] found existing configuration files:
	
	I0211 03:23:43.377992   76224 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0211 03:23:43.388582   76224 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0211 03:23:43.388647   76224 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0211 03:23:43.398236   76224 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0211 03:23:43.409266   76224 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0211 03:23:43.409330   76224 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0211 03:23:43.418959   76224 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0211 03:23:43.427405   76224 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0211 03:23:43.427446   76224 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0211 03:23:43.436462   76224 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0211 03:23:43.448305   76224 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0211 03:23:43.448356   76224 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0211 03:23:43.457848   76224 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0211 03:23:43.514222   76224 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0211 03:23:43.514402   76224 kubeadm.go:310] [preflight] Running pre-flight checks
	I0211 03:23:43.614485   76224 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0211 03:23:43.614631   76224 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0211 03:23:43.614780   76224 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0211 03:23:43.627379   76224 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0211 03:23:43.960452   76224 out.go:235]   - Generating certificates and keys ...
	I0211 03:23:43.960574   76224 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0211 03:23:43.960650   76224 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0211 03:23:43.960751   76224 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0211 03:23:44.201857   76224 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0211 03:23:44.476985   76224 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0211 03:23:44.542297   76224 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0211 03:23:44.649174   76224 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0211 03:23:44.649387   76224 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [bridge-649359 localhost] and IPs [192.168.61.91 127.0.0.1 ::1]
	I0211 03:23:44.759204   76224 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0211 03:23:44.759426   76224 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [bridge-649359 localhost] and IPs [192.168.61.91 127.0.0.1 ::1]
	I0211 03:23:44.917127   76224 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0211 03:23:45.151892   76224 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0211 03:23:45.330088   76224 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0211 03:23:45.330501   76224 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0211 03:23:45.521286   76224 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0211 03:23:45.681260   76224 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0211 03:23:45.749229   76224 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0211 03:23:45.847105   76224 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0211 03:23:45.955946   76224 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0211 03:23:45.956453   76224 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0211 03:23:45.960579   76224 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0211 03:23:43.489331   74474 pod_ready.go:103] pod "coredns-668d6bf9bc-ktrqg" in "kube-system" namespace has status "Ready":"False"
	I0211 03:23:45.489732   74474 pod_ready.go:103] pod "coredns-668d6bf9bc-ktrqg" in "kube-system" namespace has status "Ready":"False"
	I0211 03:23:46.051204   76224 out.go:235]   - Booting up control plane ...
	I0211 03:23:46.051361   76224 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0211 03:23:46.051453   76224 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0211 03:23:46.051580   76224 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0211 03:23:46.051746   76224 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0211 03:23:46.051879   76224 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0211 03:23:46.051971   76224 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0211 03:23:46.119740   76224 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0211 03:23:46.119907   76224 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0211 03:23:46.620433   76224 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.018001ms
	I0211 03:23:46.620551   76224 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0211 03:23:51.619644   76224 kubeadm.go:310] [api-check] The API server is healthy after 5.002019751s
	I0211 03:23:51.638423   76224 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0211 03:23:51.657182   76224 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0211 03:23:51.698624   76224 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0211 03:23:51.698895   76224 kubeadm.go:310] [mark-control-plane] Marking the node bridge-649359 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0211 03:23:51.711403   76224 kubeadm.go:310] [bootstrap-token] Using token: 8iaz75.2wbh73x0qbtaotir
	I0211 03:23:47.989129   74474 pod_ready.go:103] pod "coredns-668d6bf9bc-ktrqg" in "kube-system" namespace has status "Ready":"False"
	I0211 03:23:50.488696   74474 pod_ready.go:103] pod "coredns-668d6bf9bc-ktrqg" in "kube-system" namespace has status "Ready":"False"
	I0211 03:23:52.489440   74474 pod_ready.go:103] pod "coredns-668d6bf9bc-ktrqg" in "kube-system" namespace has status "Ready":"False"
	I0211 03:23:51.712648   76224 out.go:235]   - Configuring RBAC rules ...
	I0211 03:23:51.712802   76224 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0211 03:23:51.722004   76224 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0211 03:23:51.731386   76224 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0211 03:23:51.735273   76224 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0211 03:23:51.739648   76224 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0211 03:23:51.748713   76224 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0211 03:23:52.026065   76224 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0211 03:23:52.448492   76224 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0211 03:23:53.026361   76224 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0211 03:23:53.026394   76224 kubeadm.go:310] 
	I0211 03:23:53.026473   76224 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0211 03:23:53.026485   76224 kubeadm.go:310] 
	I0211 03:23:53.026596   76224 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0211 03:23:53.026607   76224 kubeadm.go:310] 
	I0211 03:23:53.026640   76224 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0211 03:23:53.026761   76224 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0211 03:23:53.026848   76224 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0211 03:23:53.026861   76224 kubeadm.go:310] 
	I0211 03:23:53.026960   76224 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0211 03:23:53.026972   76224 kubeadm.go:310] 
	I0211 03:23:53.027033   76224 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0211 03:23:53.027047   76224 kubeadm.go:310] 
	I0211 03:23:53.027091   76224 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0211 03:23:53.027155   76224 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0211 03:23:53.027225   76224 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0211 03:23:53.027242   76224 kubeadm.go:310] 
	I0211 03:23:53.027376   76224 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0211 03:23:53.027479   76224 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0211 03:23:53.027489   76224 kubeadm.go:310] 
	I0211 03:23:53.027588   76224 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 8iaz75.2wbh73x0qbtaotir \
	I0211 03:23:53.027754   76224 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2e161f5cde2e462cb9fb80847b9945297701bdc8e7251bde04f5738d45684f8b \
	I0211 03:23:53.027794   76224 kubeadm.go:310] 	--control-plane 
	I0211 03:23:53.027804   76224 kubeadm.go:310] 
	I0211 03:23:53.027924   76224 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0211 03:23:53.027934   76224 kubeadm.go:310] 
	I0211 03:23:53.028040   76224 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 8iaz75.2wbh73x0qbtaotir \
	I0211 03:23:53.028162   76224 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2e161f5cde2e462cb9fb80847b9945297701bdc8e7251bde04f5738d45684f8b 
	I0211 03:23:53.028360   76224 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0211 03:23:53.028615   76224 cni.go:84] Creating CNI manager for "bridge"
	I0211 03:23:53.030131   76224 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0211 03:23:53.031456   76224 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0211 03:23:53.042986   76224 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0211 03:23:53.061766   76224 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0211 03:23:53.061896   76224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 03:23:53.061898   76224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-649359 minikube.k8s.io/updated_at=2025_02_11T03_23_53_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=8e91f70b9b442caa4bec80b031add390ac34d321 minikube.k8s.io/name=bridge-649359 minikube.k8s.io/primary=true
	I0211 03:23:53.200359   76224 ops.go:34] apiserver oom_adj: -16
	I0211 03:23:53.203298   76224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 03:23:53.704043   76224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 03:23:54.204265   76224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 03:23:54.704063   76224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 03:23:55.204057   76224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 03:23:55.704093   76224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 03:23:56.204089   76224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 03:23:56.285166   76224 kubeadm.go:1113] duration metric: took 3.223324508s to wait for elevateKubeSystemPrivileges
	I0211 03:23:56.285198   76224 kubeadm.go:394] duration metric: took 12.973978579s to StartCluster
	I0211 03:23:56.285228   76224 settings.go:142] acquiring lock: {Name:mkf2645a714cc5873c434b18e1494d4128c48052 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 03:23:56.285310   76224 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20400-12456/kubeconfig
	I0211 03:23:56.286865   76224 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12456/kubeconfig: {Name:mkd961d61f748b29ba3bb0ad55f8216d88f98444 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 03:23:56.287154   76224 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0211 03:23:56.287177   76224 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0211 03:23:56.287229   76224 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0211 03:23:56.287317   76224 addons.go:69] Setting storage-provisioner=true in profile "bridge-649359"
	I0211 03:23:56.287337   76224 addons.go:238] Setting addon storage-provisioner=true in "bridge-649359"
	I0211 03:23:56.287349   76224 addons.go:69] Setting default-storageclass=true in profile "bridge-649359"
	I0211 03:23:56.287368   76224 host.go:66] Checking if "bridge-649359" exists ...
	I0211 03:23:56.287392   76224 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-649359"
	I0211 03:23:56.287444   76224 config.go:182] Loaded profile config "bridge-649359": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0211 03:23:56.287874   76224 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 03:23:56.287916   76224 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 03:23:56.287929   76224 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 03:23:56.287963   76224 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 03:23:56.288527   76224 out.go:177] * Verifying Kubernetes components...
	I0211 03:23:56.289823   76224 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0211 03:23:56.304775   76224 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34409
	I0211 03:23:56.305153   76224 main.go:141] libmachine: () Calling .GetVersion
	I0211 03:23:56.305632   76224 main.go:141] libmachine: Using API Version  1
	I0211 03:23:56.305658   76224 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 03:23:56.305984   76224 main.go:141] libmachine: () Calling .GetMachineName
	I0211 03:23:56.306239   76224 main.go:141] libmachine: (bridge-649359) Calling .GetState
	I0211 03:23:56.308210   76224 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35843
	I0211 03:23:56.308528   76224 main.go:141] libmachine: () Calling .GetVersion
	I0211 03:23:56.308996   76224 main.go:141] libmachine: Using API Version  1
	I0211 03:23:56.309015   76224 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 03:23:56.309301   76224 main.go:141] libmachine: () Calling .GetMachineName
	I0211 03:23:56.309726   76224 addons.go:238] Setting addon default-storageclass=true in "bridge-649359"
	I0211 03:23:56.309764   76224 host.go:66] Checking if "bridge-649359" exists ...
	I0211 03:23:56.309864   76224 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 03:23:56.309894   76224 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 03:23:56.310098   76224 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 03:23:56.310135   76224 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 03:23:56.324154   76224 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41297
	I0211 03:23:56.324556   76224 main.go:141] libmachine: () Calling .GetVersion
	I0211 03:23:56.325003   76224 main.go:141] libmachine: Using API Version  1
	I0211 03:23:56.325017   76224 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 03:23:56.325276   76224 main.go:141] libmachine: () Calling .GetMachineName
	I0211 03:23:56.325490   76224 main.go:141] libmachine: (bridge-649359) Calling .GetState
	I0211 03:23:56.325869   76224 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42691
	I0211 03:23:56.326407   76224 main.go:141] libmachine: () Calling .GetVersion
	I0211 03:23:56.326922   76224 main.go:141] libmachine: Using API Version  1
	I0211 03:23:56.326946   76224 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 03:23:56.327340   76224 main.go:141] libmachine: () Calling .GetMachineName
	I0211 03:23:56.327456   76224 main.go:141] libmachine: (bridge-649359) Calling .DriverName
	I0211 03:23:56.327952   76224 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 03:23:56.327990   76224 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 03:23:56.329016   76224 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0211 03:23:54.492072   74474 pod_ready.go:103] pod "coredns-668d6bf9bc-ktrqg" in "kube-system" namespace has status "Ready":"False"
	I0211 03:23:55.494321   74474 pod_ready.go:93] pod "coredns-668d6bf9bc-ktrqg" in "kube-system" namespace has status "Ready":"True"
	I0211 03:23:55.494351   74474 pod_ready.go:82] duration metric: took 16.51100516s for pod "coredns-668d6bf9bc-ktrqg" in "kube-system" namespace to be "Ready" ...
	I0211 03:23:55.494365   74474 pod_ready.go:79] waiting up to 15m0s for pod "etcd-flannel-649359" in "kube-system" namespace to be "Ready" ...
	I0211 03:23:55.507277   74474 pod_ready.go:93] pod "etcd-flannel-649359" in "kube-system" namespace has status "Ready":"True"
	I0211 03:23:55.507298   74474 pod_ready.go:82] duration metric: took 12.925664ms for pod "etcd-flannel-649359" in "kube-system" namespace to be "Ready" ...
	I0211 03:23:55.507308   74474 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-flannel-649359" in "kube-system" namespace to be "Ready" ...
	I0211 03:23:55.516108   74474 pod_ready.go:93] pod "kube-apiserver-flannel-649359" in "kube-system" namespace has status "Ready":"True"
	I0211 03:23:55.516128   74474 pod_ready.go:82] duration metric: took 8.814128ms for pod "kube-apiserver-flannel-649359" in "kube-system" namespace to be "Ready" ...
	I0211 03:23:55.516137   74474 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-flannel-649359" in "kube-system" namespace to be "Ready" ...
	I0211 03:23:55.523571   74474 pod_ready.go:93] pod "kube-controller-manager-flannel-649359" in "kube-system" namespace has status "Ready":"True"
	I0211 03:23:55.523589   74474 pod_ready.go:82] duration metric: took 7.446719ms for pod "kube-controller-manager-flannel-649359" in "kube-system" namespace to be "Ready" ...
	I0211 03:23:55.523597   74474 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-svqjf" in "kube-system" namespace to be "Ready" ...
	I0211 03:23:55.528072   74474 pod_ready.go:93] pod "kube-proxy-svqjf" in "kube-system" namespace has status "Ready":"True"
	I0211 03:23:55.528088   74474 pod_ready.go:82] duration metric: took 4.48524ms for pod "kube-proxy-svqjf" in "kube-system" namespace to be "Ready" ...
	I0211 03:23:55.528096   74474 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-flannel-649359" in "kube-system" namespace to be "Ready" ...
	I0211 03:23:55.887698   74474 pod_ready.go:93] pod "kube-scheduler-flannel-649359" in "kube-system" namespace has status "Ready":"True"
	I0211 03:23:55.887720   74474 pod_ready.go:82] duration metric: took 359.618939ms for pod "kube-scheduler-flannel-649359" in "kube-system" namespace to be "Ready" ...
	I0211 03:23:55.887735   74474 pod_ready.go:39] duration metric: took 16.909780883s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
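The pod_ready.go lines above poll each system pod until its Ready condition reports True. A rough client-go equivalent for a single pod; this is a sketch rather than minikube's helper, and the kubeconfig path and pod name are simply the ones visible in this log:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// On-node kubeconfig used by the commands in this log (assumes it is readable).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(),
		"coredns-668d6bf9bc-ktrqg", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			// The waits above loop until this reports "True".
			fmt.Println("Ready:", cond.Status)
		}
	}
}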
	I0211 03:23:55.887755   74474 api_server.go:52] waiting for apiserver process to appear ...
	I0211 03:23:55.887802   74474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:23:55.902278   74474 api_server.go:72] duration metric: took 24.7430451s to wait for apiserver process to appear ...
	I0211 03:23:55.902297   74474 api_server.go:88] waiting for apiserver healthz status ...
	I0211 03:23:55.902312   74474 api_server.go:253] Checking apiserver healthz at https://192.168.72.59:8443/healthz ...
	I0211 03:23:55.907542   74474 api_server.go:279] https://192.168.72.59:8443/healthz returned 200:
	ok
	I0211 03:23:55.908534   74474 api_server.go:141] control plane version: v1.32.1
	I0211 03:23:55.908552   74474 api_server.go:131] duration metric: took 6.249772ms to wait for apiserver health ...
	I0211 03:23:55.908559   74474 system_pods.go:43] waiting for kube-system pods to appear ...
	I0211 03:23:56.088276   74474 system_pods.go:59] 7 kube-system pods found
	I0211 03:23:56.088313   74474 system_pods.go:61] "coredns-668d6bf9bc-ktrqg" [25de257a-811d-450d-9f38-d3cbbe560bc7] Running
	I0211 03:23:56.088320   74474 system_pods.go:61] "etcd-flannel-649359" [446a2b15-8be7-4fbb-9ba5-80ad99efe86f] Running
	I0211 03:23:56.088326   74474 system_pods.go:61] "kube-apiserver-flannel-649359" [3aa2d94e-4bac-4f10-92eb-62c5bc8a9497] Running
	I0211 03:23:56.088331   74474 system_pods.go:61] "kube-controller-manager-flannel-649359" [e38e3569-0f84-46e3-9cb3-d55fc351cf71] Running
	I0211 03:23:56.088335   74474 system_pods.go:61] "kube-proxy-svqjf" [837e961f-4d98-436b-8d7b-1c58fc12c210] Running
	I0211 03:23:56.088340   74474 system_pods.go:61] "kube-scheduler-flannel-649359" [dc8b4f66-a15b-429e-8c5d-564222514190] Running
	I0211 03:23:56.088344   74474 system_pods.go:61] "storage-provisioner" [4006f839-4055-49b7-a80b-727ea6577959] Running
	I0211 03:23:56.088352   74474 system_pods.go:74] duration metric: took 179.787278ms to wait for pod list to return data ...
	I0211 03:23:56.088362   74474 default_sa.go:34] waiting for default service account to be created ...
	I0211 03:23:56.291065   74474 default_sa.go:45] found service account: "default"
	I0211 03:23:56.291104   74474 default_sa.go:55] duration metric: took 202.726139ms for default service account to be created ...
	I0211 03:23:56.291122   74474 system_pods.go:116] waiting for k8s-apps to be running ...
	I0211 03:23:56.488817   74474 system_pods.go:86] 7 kube-system pods found
	I0211 03:23:56.488843   74474 system_pods.go:89] "coredns-668d6bf9bc-ktrqg" [25de257a-811d-450d-9f38-d3cbbe560bc7] Running
	I0211 03:23:56.488849   74474 system_pods.go:89] "etcd-flannel-649359" [446a2b15-8be7-4fbb-9ba5-80ad99efe86f] Running
	I0211 03:23:56.488852   74474 system_pods.go:89] "kube-apiserver-flannel-649359" [3aa2d94e-4bac-4f10-92eb-62c5bc8a9497] Running
	I0211 03:23:56.488856   74474 system_pods.go:89] "kube-controller-manager-flannel-649359" [e38e3569-0f84-46e3-9cb3-d55fc351cf71] Running
	I0211 03:23:56.488859   74474 system_pods.go:89] "kube-proxy-svqjf" [837e961f-4d98-436b-8d7b-1c58fc12c210] Running
	I0211 03:23:56.488862   74474 system_pods.go:89] "kube-scheduler-flannel-649359" [dc8b4f66-a15b-429e-8c5d-564222514190] Running
	I0211 03:23:56.488865   74474 system_pods.go:89] "storage-provisioner" [4006f839-4055-49b7-a80b-727ea6577959] Running
	I0211 03:23:56.488872   74474 system_pods.go:126] duration metric: took 197.742474ms to wait for k8s-apps to be running ...
	I0211 03:23:56.488878   74474 system_svc.go:44] waiting for kubelet service to be running ....
	I0211 03:23:56.488917   74474 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0211 03:23:56.505335   74474 system_svc.go:56] duration metric: took 16.448357ms WaitForService to wait for kubelet
	I0211 03:23:56.505361   74474 kubeadm.go:582] duration metric: took 25.346130872s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0211 03:23:56.505377   74474 node_conditions.go:102] verifying NodePressure condition ...
	I0211 03:23:56.689125   74474 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0211 03:23:56.689166   74474 node_conditions.go:123] node cpu capacity is 2
	I0211 03:23:56.689183   74474 node_conditions.go:105] duration metric: took 183.800285ms to run NodePressure ...
	I0211 03:23:56.689199   74474 start.go:241] waiting for startup goroutines ...
	I0211 03:23:56.689208   74474 start.go:246] waiting for cluster config update ...
	I0211 03:23:56.689224   74474 start.go:255] writing updated cluster config ...
	I0211 03:23:56.689599   74474 ssh_runner.go:195] Run: rm -f paused
	I0211 03:23:56.738058   74474 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0211 03:23:56.739640   74474 out.go:177] * Done! kubectl is now configured to use "flannel-649359" cluster and "default" namespace by default
	I0211 03:23:56.330378   76224 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0211 03:23:56.330399   76224 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0211 03:23:56.330425   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHHostname
	I0211 03:23:56.336620   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:56.337050   76224 main.go:141] libmachine: (bridge-649359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:d7:2b", ip: ""} in network mk-bridge-649359: {Iface:virbr3 ExpiryTime:2025-02-11 04:23:29 +0000 UTC Type:0 Mac:52:54:00:2f:d7:2b Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:bridge-649359 Clientid:01:52:54:00:2f:d7:2b}
	I0211 03:23:56.337074   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined IP address 192.168.61.91 and MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:56.337308   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHPort
	I0211 03:23:56.337511   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHKeyPath
	I0211 03:23:56.337643   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHUsername
	I0211 03:23:56.337805   76224 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/bridge-649359/id_rsa Username:docker}
	I0211 03:23:56.344193   76224 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35827
	I0211 03:23:56.344682   76224 main.go:141] libmachine: () Calling .GetVersion
	I0211 03:23:56.345240   76224 main.go:141] libmachine: Using API Version  1
	I0211 03:23:56.345257   76224 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 03:23:56.345755   76224 main.go:141] libmachine: () Calling .GetMachineName
	I0211 03:23:56.345926   76224 main.go:141] libmachine: (bridge-649359) Calling .GetState
	I0211 03:23:56.347412   76224 main.go:141] libmachine: (bridge-649359) Calling .DriverName
	I0211 03:23:56.347649   76224 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0211 03:23:56.347664   76224 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0211 03:23:56.347680   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHHostname
	I0211 03:23:56.350378   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:56.350643   76224 main.go:141] libmachine: (bridge-649359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:d7:2b", ip: ""} in network mk-bridge-649359: {Iface:virbr3 ExpiryTime:2025-02-11 04:23:29 +0000 UTC Type:0 Mac:52:54:00:2f:d7:2b Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:bridge-649359 Clientid:01:52:54:00:2f:d7:2b}
	I0211 03:23:56.350662   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined IP address 192.168.61.91 and MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:56.350793   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHPort
	I0211 03:23:56.350940   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHKeyPath
	I0211 03:23:56.351037   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHUsername
	I0211 03:23:56.351138   76224 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/bridge-649359/id_rsa Username:docker}
	I0211 03:23:56.443520   76224 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0211 03:23:56.464811   76224 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0211 03:23:56.601622   76224 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0211 03:23:56.647861   76224 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0211 03:23:56.917727   76224 start.go:971] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
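The sed pipeline at 03:23:56.443520 rewrites the coredns ConfigMap so CoreDNS answers host.minikube.internal with the host-side gateway address (192.168.61.1 here). A hedged client-go sketch of the same edit done through the API instead of sed; the namespace, ConfigMap name, and kubeconfig path are the ones visible in this log:

package main

import (
	"context"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Insert the hosts block the log reports injecting, immediately before
	// the forward plugin, mirroring the sed expression above.
	hosts := "        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }\n"
	cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"],
		"        forward . /etc/resolv.conf",
		hosts+"        forward . /etc/resolv.conf", 1)
	if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}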
	I0211 03:23:56.917816   76224 main.go:141] libmachine: Making call to close driver server
	I0211 03:23:56.917847   76224 main.go:141] libmachine: (bridge-649359) Calling .Close
	I0211 03:23:56.918167   76224 main.go:141] libmachine: Successfully made call to close driver server
	I0211 03:23:56.918188   76224 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 03:23:56.918198   76224 main.go:141] libmachine: Making call to close driver server
	I0211 03:23:56.918207   76224 main.go:141] libmachine: (bridge-649359) Calling .Close
	I0211 03:23:56.918959   76224 main.go:141] libmachine: (bridge-649359) DBG | Closing plugin on server side
	I0211 03:23:56.918996   76224 main.go:141] libmachine: Successfully made call to close driver server
	I0211 03:23:56.919011   76224 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 03:23:56.919119   76224 node_ready.go:35] waiting up to 15m0s for node "bridge-649359" to be "Ready" ...
	I0211 03:23:56.932685   76224 node_ready.go:49] node "bridge-649359" has status "Ready":"True"
	I0211 03:23:56.932704   76224 node_ready.go:38] duration metric: took 13.54605ms for node "bridge-649359" to be "Ready" ...
	I0211 03:23:56.932714   76224 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0211 03:23:56.951563   76224 pod_ready.go:79] waiting up to 15m0s for pod "etcd-bridge-649359" in "kube-system" namespace to be "Ready" ...
	I0211 03:23:56.951959   76224 main.go:141] libmachine: Making call to close driver server
	I0211 03:23:56.951976   76224 main.go:141] libmachine: (bridge-649359) Calling .Close
	I0211 03:23:56.952230   76224 main.go:141] libmachine: Successfully made call to close driver server
	I0211 03:23:56.952236   76224 main.go:141] libmachine: (bridge-649359) DBG | Closing plugin on server side
	I0211 03:23:56.952247   76224 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 03:23:57.178049   76224 main.go:141] libmachine: Making call to close driver server
	I0211 03:23:57.178075   76224 main.go:141] libmachine: (bridge-649359) Calling .Close
	I0211 03:23:57.178329   76224 main.go:141] libmachine: (bridge-649359) DBG | Closing plugin on server side
	I0211 03:23:57.178352   76224 main.go:141] libmachine: Successfully made call to close driver server
	I0211 03:23:57.178365   76224 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 03:23:57.178374   76224 main.go:141] libmachine: Making call to close driver server
	I0211 03:23:57.178381   76224 main.go:141] libmachine: (bridge-649359) Calling .Close
	I0211 03:23:57.178603   76224 main.go:141] libmachine: Successfully made call to close driver server
	I0211 03:23:57.178624   76224 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 03:23:57.178629   76224 main.go:141] libmachine: (bridge-649359) DBG | Closing plugin on server side
	I0211 03:23:57.180985   76224 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0211 03:23:57.182315   76224 addons.go:514] duration metric: took 895.081368ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0211 03:23:57.423036   76224 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-649359" context rescaled to 1 replicas
	I0211 03:23:58.957485   76224 pod_ready.go:103] pod "etcd-bridge-649359" in "kube-system" namespace has status "Ready":"False"
	I0211 03:24:01.457551   76224 pod_ready.go:103] pod "etcd-bridge-649359" in "kube-system" namespace has status "Ready":"False"
	I0211 03:24:03.462842   76224 pod_ready.go:103] pod "etcd-bridge-649359" in "kube-system" namespace has status "Ready":"False"
	I0211 03:24:05.957301   76224 pod_ready.go:103] pod "etcd-bridge-649359" in "kube-system" namespace has status "Ready":"False"
	I0211 03:24:06.957208   76224 pod_ready.go:93] pod "etcd-bridge-649359" in "kube-system" namespace has status "Ready":"True"
	I0211 03:24:06.957232   76224 pod_ready.go:82] duration metric: took 10.005644253s for pod "etcd-bridge-649359" in "kube-system" namespace to be "Ready" ...
	I0211 03:24:06.957244   76224 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-bridge-649359" in "kube-system" namespace to be "Ready" ...
	I0211 03:24:06.960650   76224 pod_ready.go:93] pod "kube-apiserver-bridge-649359" in "kube-system" namespace has status "Ready":"True"
	I0211 03:24:06.960668   76224 pod_ready.go:82] duration metric: took 3.416505ms for pod "kube-apiserver-bridge-649359" in "kube-system" namespace to be "Ready" ...
	I0211 03:24:06.960679   76224 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-bridge-649359" in "kube-system" namespace to be "Ready" ...
	I0211 03:24:06.965014   76224 pod_ready.go:93] pod "kube-controller-manager-bridge-649359" in "kube-system" namespace has status "Ready":"True"
	I0211 03:24:06.965028   76224 pod_ready.go:82] duration metric: took 4.342119ms for pod "kube-controller-manager-bridge-649359" in "kube-system" namespace to be "Ready" ...
	I0211 03:24:06.965035   76224 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-9q77c" in "kube-system" namespace to be "Ready" ...
	I0211 03:24:06.969396   76224 pod_ready.go:93] pod "kube-proxy-9q77c" in "kube-system" namespace has status "Ready":"True"
	I0211 03:24:06.969409   76224 pod_ready.go:82] duration metric: took 4.370008ms for pod "kube-proxy-9q77c" in "kube-system" namespace to be "Ready" ...
	I0211 03:24:06.969416   76224 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-bridge-649359" in "kube-system" namespace to be "Ready" ...
	I0211 03:24:06.973028   76224 pod_ready.go:93] pod "kube-scheduler-bridge-649359" in "kube-system" namespace has status "Ready":"True"
	I0211 03:24:06.973041   76224 pod_ready.go:82] duration metric: took 3.620954ms for pod "kube-scheduler-bridge-649359" in "kube-system" namespace to be "Ready" ...
	I0211 03:24:06.973047   76224 pod_ready.go:39] duration metric: took 10.040322007s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0211 03:24:06.973061   76224 api_server.go:52] waiting for apiserver process to appear ...
	I0211 03:24:06.973101   76224 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:24:06.987215   76224 api_server.go:72] duration metric: took 10.699994196s to wait for apiserver process to appear ...
	I0211 03:24:06.987240   76224 api_server.go:88] waiting for apiserver healthz status ...
	I0211 03:24:06.987262   76224 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I0211 03:24:06.993172   76224 api_server.go:279] https://192.168.61.91:8443/healthz returned 200:
	ok
	I0211 03:24:06.994390   76224 api_server.go:141] control plane version: v1.32.1
	I0211 03:24:06.994411   76224 api_server.go:131] duration metric: took 7.16426ms to wait for apiserver health ...
	I0211 03:24:06.994418   76224 system_pods.go:43] waiting for kube-system pods to appear ...
	I0211 03:24:07.157057   76224 system_pods.go:59] 7 kube-system pods found
	I0211 03:24:07.157089   76224 system_pods.go:61] "coredns-668d6bf9bc-jfw64" [c6a15f81-0759-41df-957c-d7ad97cc9a6a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0211 03:24:07.157095   76224 system_pods.go:61] "etcd-bridge-649359" [6077ad50-145b-49f1-96b4-ba1fb2c2b33c] Running
	I0211 03:24:07.157101   76224 system_pods.go:61] "kube-apiserver-bridge-649359" [cf9c4983-d1a8-481c-ae23-8867414f715c] Running
	I0211 03:24:07.157105   76224 system_pods.go:61] "kube-controller-manager-bridge-649359" [f85cff22-a57e-45ba-9e4b-1583816a9ccb] Running
	I0211 03:24:07.157109   76224 system_pods.go:61] "kube-proxy-9q77c" [be4d3372-9382-4dbd-a850-5729fa3918a5] Running
	I0211 03:24:07.157112   76224 system_pods.go:61] "kube-scheduler-bridge-649359" [d85c8067-6a92-455f-8eb2-bc0f5e7b2d5c] Running
	I0211 03:24:07.157115   76224 system_pods.go:61] "storage-provisioner" [446d17e1-30af-4afc-86b0-f55654c31967] Running
	I0211 03:24:07.157122   76224 system_pods.go:74] duration metric: took 162.698222ms to wait for pod list to return data ...
	I0211 03:24:07.157128   76224 default_sa.go:34] waiting for default service account to be created ...
	I0211 03:24:07.356681   76224 default_sa.go:45] found service account: "default"
	I0211 03:24:07.356714   76224 default_sa.go:55] duration metric: took 199.579483ms for default service account to be created ...
	I0211 03:24:07.356726   76224 system_pods.go:116] waiting for k8s-apps to be running ...
	I0211 03:24:07.556397   76224 system_pods.go:86] 7 kube-system pods found
	I0211 03:24:07.556432   76224 system_pods.go:89] "coredns-668d6bf9bc-jfw64" [c6a15f81-0759-41df-957c-d7ad97cc9a6a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0211 03:24:07.556440   76224 system_pods.go:89] "etcd-bridge-649359" [6077ad50-145b-49f1-96b4-ba1fb2c2b33c] Running
	I0211 03:24:07.556446   76224 system_pods.go:89] "kube-apiserver-bridge-649359" [cf9c4983-d1a8-481c-ae23-8867414f715c] Running
	I0211 03:24:07.556451   76224 system_pods.go:89] "kube-controller-manager-bridge-649359" [f85cff22-a57e-45ba-9e4b-1583816a9ccb] Running
	I0211 03:24:07.556456   76224 system_pods.go:89] "kube-proxy-9q77c" [be4d3372-9382-4dbd-a850-5729fa3918a5] Running
	I0211 03:24:07.556460   76224 system_pods.go:89] "kube-scheduler-bridge-649359" [d85c8067-6a92-455f-8eb2-bc0f5e7b2d5c] Running
	I0211 03:24:07.556464   76224 system_pods.go:89] "storage-provisioner" [446d17e1-30af-4afc-86b0-f55654c31967] Running
	I0211 03:24:07.556471   76224 system_pods.go:126] duration metric: took 199.7395ms to wait for k8s-apps to be running ...
	I0211 03:24:07.556478   76224 system_svc.go:44] waiting for kubelet service to be running ....
	I0211 03:24:07.556519   76224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0211 03:24:07.571047   76224 system_svc.go:56] duration metric: took 14.559978ms WaitForService to wait for kubelet
	I0211 03:24:07.571081   76224 kubeadm.go:582] duration metric: took 11.283863044s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0211 03:24:07.571111   76224 node_conditions.go:102] verifying NodePressure condition ...
	I0211 03:24:07.756253   76224 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0211 03:24:07.756287   76224 node_conditions.go:123] node cpu capacity is 2
	I0211 03:24:07.756301   76224 node_conditions.go:105] duration metric: took 185.167323ms to run NodePressure ...
	I0211 03:24:07.756318   76224 start.go:241] waiting for startup goroutines ...
	I0211 03:24:07.756328   76224 start.go:246] waiting for cluster config update ...
	I0211 03:24:07.756342   76224 start.go:255] writing updated cluster config ...
	I0211 03:24:07.756642   76224 ssh_runner.go:195] Run: rm -f paused
	I0211 03:24:07.803293   76224 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0211 03:24:07.805468   76224 out.go:177] * Done! kubectl is now configured to use "bridge-649359" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Feb 11 03:31:29 old-k8s-version-244815 crio[625]: time="2025-02-11 03:31:29.872897063Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739244689872856353,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c91a0bee-b1f0-4757-a7cc-130e6b511dff name=/runtime.v1.ImageService/ImageFsInfo
	Feb 11 03:31:29 old-k8s-version-244815 crio[625]: time="2025-02-11 03:31:29.873294494Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ff204835-2fbc-49fc-acec-5f67a929e9e8 name=/runtime.v1.RuntimeService/ListContainers
	Feb 11 03:31:29 old-k8s-version-244815 crio[625]: time="2025-02-11 03:31:29.873343154Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ff204835-2fbc-49fc-acec-5f67a929e9e8 name=/runtime.v1.RuntimeService/ListContainers
	Feb 11 03:31:29 old-k8s-version-244815 crio[625]: time="2025-02-11 03:31:29.873377606Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ff204835-2fbc-49fc-acec-5f67a929e9e8 name=/runtime.v1.RuntimeService/ListContainers
	Feb 11 03:31:29 old-k8s-version-244815 crio[625]: time="2025-02-11 03:31:29.909774699Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1623a8ba-02cd-40ad-a792-b80dbd26a3ca name=/runtime.v1.RuntimeService/Version
	Feb 11 03:31:29 old-k8s-version-244815 crio[625]: time="2025-02-11 03:31:29.909845830Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1623a8ba-02cd-40ad-a792-b80dbd26a3ca name=/runtime.v1.RuntimeService/Version
	Feb 11 03:31:29 old-k8s-version-244815 crio[625]: time="2025-02-11 03:31:29.910810610Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=db22ebf2-96a5-4227-ba70-3a0a792ede90 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 11 03:31:29 old-k8s-version-244815 crio[625]: time="2025-02-11 03:31:29.911187025Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739244689911165734,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=db22ebf2-96a5-4227-ba70-3a0a792ede90 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 11 03:31:29 old-k8s-version-244815 crio[625]: time="2025-02-11 03:31:29.911898333Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c2c356be-584e-4451-beb0-04f4000333cf name=/runtime.v1.RuntimeService/ListContainers
	Feb 11 03:31:29 old-k8s-version-244815 crio[625]: time="2025-02-11 03:31:29.911948594Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c2c356be-584e-4451-beb0-04f4000333cf name=/runtime.v1.RuntimeService/ListContainers
	Feb 11 03:31:29 old-k8s-version-244815 crio[625]: time="2025-02-11 03:31:29.911978551Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c2c356be-584e-4451-beb0-04f4000333cf name=/runtime.v1.RuntimeService/ListContainers
	Feb 11 03:31:29 old-k8s-version-244815 crio[625]: time="2025-02-11 03:31:29.945225637Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=af14e120-7299-4d73-b3a3-246975b1a0e7 name=/runtime.v1.RuntimeService/Version
	Feb 11 03:31:29 old-k8s-version-244815 crio[625]: time="2025-02-11 03:31:29.945300230Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=af14e120-7299-4d73-b3a3-246975b1a0e7 name=/runtime.v1.RuntimeService/Version
	Feb 11 03:31:29 old-k8s-version-244815 crio[625]: time="2025-02-11 03:31:29.946238658Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b42af80c-d357-4e04-9146-91735e5eb00d name=/runtime.v1.ImageService/ImageFsInfo
	Feb 11 03:31:29 old-k8s-version-244815 crio[625]: time="2025-02-11 03:31:29.946652962Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739244689946629167,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b42af80c-d357-4e04-9146-91735e5eb00d name=/runtime.v1.ImageService/ImageFsInfo
	Feb 11 03:31:29 old-k8s-version-244815 crio[625]: time="2025-02-11 03:31:29.947314399Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=803d0e04-c5fa-43e6-983a-186cb8f335e5 name=/runtime.v1.RuntimeService/ListContainers
	Feb 11 03:31:29 old-k8s-version-244815 crio[625]: time="2025-02-11 03:31:29.947373792Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=803d0e04-c5fa-43e6-983a-186cb8f335e5 name=/runtime.v1.RuntimeService/ListContainers
	Feb 11 03:31:29 old-k8s-version-244815 crio[625]: time="2025-02-11 03:31:29.947405683Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=803d0e04-c5fa-43e6-983a-186cb8f335e5 name=/runtime.v1.RuntimeService/ListContainers
	Feb 11 03:31:29 old-k8s-version-244815 crio[625]: time="2025-02-11 03:31:29.978088266Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6ddd4a71-1991-43ba-bf3d-1858ec6506d8 name=/runtime.v1.RuntimeService/Version
	Feb 11 03:31:29 old-k8s-version-244815 crio[625]: time="2025-02-11 03:31:29.978160028Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6ddd4a71-1991-43ba-bf3d-1858ec6506d8 name=/runtime.v1.RuntimeService/Version
	Feb 11 03:31:29 old-k8s-version-244815 crio[625]: time="2025-02-11 03:31:29.979187006Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=230334d6-889e-4ea3-8303-3fd72dfdfbd5 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 11 03:31:29 old-k8s-version-244815 crio[625]: time="2025-02-11 03:31:29.979554656Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739244689979529204,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=230334d6-889e-4ea3-8303-3fd72dfdfbd5 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 11 03:31:29 old-k8s-version-244815 crio[625]: time="2025-02-11 03:31:29.980225209Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fa63c3d5-51ea-470a-8995-ce7f7d40cd19 name=/runtime.v1.RuntimeService/ListContainers
	Feb 11 03:31:29 old-k8s-version-244815 crio[625]: time="2025-02-11 03:31:29.980275493Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fa63c3d5-51ea-470a-8995-ce7f7d40cd19 name=/runtime.v1.RuntimeService/ListContainers
	Feb 11 03:31:29 old-k8s-version-244815 crio[625]: time="2025-02-11 03:31:29.980317525Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=fa63c3d5-51ea-470a-8995-ce7f7d40cd19 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Feb11 03:14] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053978] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039203] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.074931] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.949355] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.579835] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.978569] systemd-fstab-generator[552]: Ignoring "noauto" option for root device
	[  +0.065488] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058561] systemd-fstab-generator[564]: Ignoring "noauto" option for root device
	[  +0.199108] systemd-fstab-generator[578]: Ignoring "noauto" option for root device
	[  +0.179272] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.277363] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +6.347509] systemd-fstab-generator[869]: Ignoring "noauto" option for root device
	[  +0.058497] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.802316] systemd-fstab-generator[993]: Ignoring "noauto" option for root device
	[ +12.357601] kauditd_printk_skb: 46 callbacks suppressed
	[Feb11 03:18] systemd-fstab-generator[5029]: Ignoring "noauto" option for root device
	[Feb11 03:20] systemd-fstab-generator[5308]: Ignoring "noauto" option for root device
	[  +0.093052] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 03:31:30 up 17 min,  0 users,  load average: 0.00, 0.05, 0.07
	Linux old-k8s-version-244815 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Feb 11 03:31:29 old-k8s-version-244815 kubelet[6473]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:138 +0x185
	Feb 11 03:31:29 old-k8s-version-244815 kubelet[6473]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	Feb 11 03:31:29 old-k8s-version-244815 kubelet[6473]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:222 +0x70
	Feb 11 03:31:29 old-k8s-version-244815 kubelet[6473]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0008ca6f0)
	Feb 11 03:31:29 old-k8s-version-244815 kubelet[6473]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Feb 11 03:31:29 old-k8s-version-244815 kubelet[6473]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000b2def0, 0x4f0ac20, 0xc000ae4f50, 0x1, 0xc0001020c0)
	Feb 11 03:31:29 old-k8s-version-244815 kubelet[6473]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Feb 11 03:31:29 old-k8s-version-244815 kubelet[6473]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0000d8d20, 0xc0001020c0)
	Feb 11 03:31:29 old-k8s-version-244815 kubelet[6473]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Feb 11 03:31:29 old-k8s-version-244815 kubelet[6473]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Feb 11 03:31:29 old-k8s-version-244815 kubelet[6473]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Feb 11 03:31:29 old-k8s-version-244815 kubelet[6473]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000c64240, 0xc000ac7e40)
	Feb 11 03:31:29 old-k8s-version-244815 kubelet[6473]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Feb 11 03:31:29 old-k8s-version-244815 kubelet[6473]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Feb 11 03:31:29 old-k8s-version-244815 kubelet[6473]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Feb 11 03:31:29 old-k8s-version-244815 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 11 03:31:29 old-k8s-version-244815 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 11 03:31:29 old-k8s-version-244815 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Feb 11 03:31:29 old-k8s-version-244815 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 11 03:31:29 old-k8s-version-244815 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 11 03:31:29 old-k8s-version-244815 kubelet[6521]: I0211 03:31:29.918920    6521 server.go:416] Version: v1.20.0
	Feb 11 03:31:29 old-k8s-version-244815 kubelet[6521]: I0211 03:31:29.919170    6521 server.go:837] Client rotation is on, will bootstrap in background
	Feb 11 03:31:29 old-k8s-version-244815 kubelet[6521]: I0211 03:31:29.921017    6521 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 11 03:31:29 old-k8s-version-244815 kubelet[6521]: W0211 03:31:29.921972    6521 manager.go:159] Cannot detect current cgroup on cgroup v2
	Feb 11 03:31:29 old-k8s-version-244815 kubelet[6521]: I0211 03:31:29.922104    6521 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
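The captured minikube log above records the control-plane health probe ("Checking apiserver healthz at https://192.168.61.91:8443/healthz ... returned 200: ok"). As an illustration only — this is not minikube's own implementation — a minimal Go sketch of an equivalent manual probe, assuming the same endpoint and skipping TLS verification for a throwaway check:

// healthz_probe.go: hedged sketch of a manual apiserver healthz check.
// The endpoint below is taken from the log above; InsecureSkipVerify is an
// assumption for a quick probe, not how minikube verifies the apiserver.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a cluster-signed cert; for a throwaway probe we
		// skip verification instead of loading the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.61.91:8443/healthz")
	if err != nil {
		fmt.Println("healthz request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy control plane answers 200 with the body "ok",
	// matching the "returned 200: ok" line in the log.
	fmt.Printf("status=%d body=%q\n", resp.StatusCode, string(body))
}

A healthy apiserver answers 200 with body "ok", as the successful bridge-649359 start above shows; the stopped old-k8s-version-244815 apiserver would instead refuse the connection, which is why the describe-nodes and status commands below fail.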
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-244815 -n old-k8s-version-244815
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-244815 -n old-k8s-version-244815: exit status 2 (214.910909ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-244815" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (541.44s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (374.64s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:31:39.595039   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/calico-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:31:40.617243   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:31:52.116508   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:32:07.295042   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/calico-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:32:19.289940   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:32:23.755008   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/functional-454298/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:32:45.863226   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/custom-flannel-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:33:13.565822   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/custom-flannel-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:33:15.922255   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/enable-default-cni-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:33:43.625445   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/enable-default-cni-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:33:56.757686   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:34:02.578348   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/auto-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:34:08.255730   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:34:16.211109   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:34:20.064865   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/no-preload-214316/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:34:24.458997   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:34:30.819211   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/default-k8s-diff-port-697681/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:34:35.958256   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:35:43.128525   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/no-preload-214316/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:35:58.872083   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/kindnet-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:36:39.594987   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/calico-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
E0211 03:37:23.754897   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/functional-454298/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.206:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.206:8443: connect: connection refused
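What the helper at helpers_test.go:329 is doing here is a simple poll: list pods in the kubernetes-dashboard namespace by label selector and retry until the deadline, with every attempt failing because nothing is listening on the apiserver port. A minimal sketch of such a loop with client-go is below; the namespace, label selector, and 9-minute deadline come from the log, while the kubeconfig path, the 3-second poll interval, and the rest are assumptions for illustration, not minikube's actual helper code.

	package main

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Hypothetical kubeconfig path; the test uses the profile's own kubeconfig.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Overall deadline mirrors the test's 9m0s wait.
		ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
		defer cancel()
		for {
			pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx,
				metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
			if err != nil {
				// While the apiserver is down this prints "connection refused", as in the log.
				fmt.Println("WARNING: pod list returned:", err)
			} else if len(pods.Items) > 0 {
				fmt.Println("found", len(pods.Items), "dashboard pod(s)")
				return
			}
			select {
			case <-ctx.Done():
				fmt.Println("context deadline exceeded") // what the test finally reports
				return
			case <-time.After(3 * time.Second): // assumed poll interval
			}
		}
	}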
start_stop_delete_test.go:285: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-244815 -n old-k8s-version-244815
start_stop_delete_test.go:285: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-244815 -n old-k8s-version-244815: exit status 2 (211.300688ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:285: status error: exit status 2 (may be ok)
start_stop_delete_test.go:285: "old-k8s-version-244815" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-244815 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context old-k8s-version-244815 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.091µs)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-244815 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
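The check behind the last line above inspects the dashboard-metrics-scraper deployment and expects one of its container images to contain registry.k8s.io/echoserver:1.4; because the describe call never reached the apiserver, the deployment info it prints is empty. Assuming a reachable apiserver, an equivalent lookup with client-go could look roughly like the sketch below (the kubeconfig path is a placeholder and this is not the test's actual code):

	package main

	import (
		"context"
		"fmt"
		"strings"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Hypothetical kubeconfig path for the old-k8s-version-244815 profile.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		dep, err := client.AppsV1().Deployments("kubernetes-dashboard").
			Get(context.Background(), "dashboard-metrics-scraper", metav1.GetOptions{})
		if err != nil {
			// With the apiserver stopped this fails, which is why the log shows no deployment info.
			panic(err)
		}
		for _, c := range dep.Spec.Template.Spec.Containers {
			ok := strings.Contains(c.Image, "registry.k8s.io/echoserver:1.4")
			fmt.Printf("container %q uses image %q (expected echoserver image: %v)\n", c.Name, c.Image, ok)
		}
	}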
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-244815 -n old-k8s-version-244815
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-244815 -n old-k8s-version-244815: exit status 2 (205.943983ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-244815 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile    |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-649359 sudo iptables                       | bridge-649359 | jenkins | v1.35.0 | 11 Feb 25 03:24 UTC | 11 Feb 25 03:24 UTC |
	|         | -t nat -L -n -v                                      |               |         |         |                     |                     |
	| ssh     | -p bridge-649359 sudo                                | bridge-649359 | jenkins | v1.35.0 | 11 Feb 25 03:24 UTC | 11 Feb 25 03:24 UTC |
	|         | systemctl status kubelet --all                       |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-649359 sudo                                | bridge-649359 | jenkins | v1.35.0 | 11 Feb 25 03:24 UTC | 11 Feb 25 03:24 UTC |
	|         | systemctl cat kubelet                                |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-649359 sudo                                | bridge-649359 | jenkins | v1.35.0 | 11 Feb 25 03:24 UTC | 11 Feb 25 03:24 UTC |
	|         | journalctl -xeu kubelet --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-649359 sudo cat                            | bridge-649359 | jenkins | v1.35.0 | 11 Feb 25 03:24 UTC | 11 Feb 25 03:24 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |               |         |         |                     |                     |
	| ssh     | -p bridge-649359 sudo cat                            | bridge-649359 | jenkins | v1.35.0 | 11 Feb 25 03:24 UTC | 11 Feb 25 03:24 UTC |
	|         | /var/lib/kubelet/config.yaml                         |               |         |         |                     |                     |
	| ssh     | -p bridge-649359 sudo                                | bridge-649359 | jenkins | v1.35.0 | 11 Feb 25 03:24 UTC |                     |
	|         | systemctl status docker --all                        |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-649359 sudo                                | bridge-649359 | jenkins | v1.35.0 | 11 Feb 25 03:24 UTC | 11 Feb 25 03:24 UTC |
	|         | systemctl cat docker                                 |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-649359 sudo cat                            | bridge-649359 | jenkins | v1.35.0 | 11 Feb 25 03:24 UTC | 11 Feb 25 03:24 UTC |
	|         | /etc/docker/daemon.json                              |               |         |         |                     |                     |
	| ssh     | -p bridge-649359 sudo docker                         | bridge-649359 | jenkins | v1.35.0 | 11 Feb 25 03:24 UTC |                     |
	|         | system info                                          |               |         |         |                     |                     |
	| ssh     | -p bridge-649359 sudo                                | bridge-649359 | jenkins | v1.35.0 | 11 Feb 25 03:24 UTC |                     |
	|         | systemctl status cri-docker                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p bridge-649359 sudo                                | bridge-649359 | jenkins | v1.35.0 | 11 Feb 25 03:24 UTC | 11 Feb 25 03:24 UTC |
	|         | systemctl cat cri-docker                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-649359 sudo cat                            | bridge-649359 | jenkins | v1.35.0 | 11 Feb 25 03:24 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |               |         |         |                     |                     |
	| ssh     | -p bridge-649359 sudo cat                            | bridge-649359 | jenkins | v1.35.0 | 11 Feb 25 03:24 UTC | 11 Feb 25 03:24 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |               |         |         |                     |                     |
	| ssh     | -p bridge-649359 sudo                                | bridge-649359 | jenkins | v1.35.0 | 11 Feb 25 03:24 UTC | 11 Feb 25 03:24 UTC |
	|         | cri-dockerd --version                                |               |         |         |                     |                     |
	| ssh     | -p bridge-649359 sudo                                | bridge-649359 | jenkins | v1.35.0 | 11 Feb 25 03:24 UTC |                     |
	|         | systemctl status containerd                          |               |         |         |                     |                     |
	|         | --all --full --no-pager                              |               |         |         |                     |                     |
	| ssh     | -p bridge-649359 sudo                                | bridge-649359 | jenkins | v1.35.0 | 11 Feb 25 03:24 UTC | 11 Feb 25 03:24 UTC |
	|         | systemctl cat containerd                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p bridge-649359 sudo cat                            | bridge-649359 | jenkins | v1.35.0 | 11 Feb 25 03:24 UTC | 11 Feb 25 03:24 UTC |
	|         | /lib/systemd/system/containerd.service               |               |         |         |                     |                     |
	| ssh     | -p bridge-649359 sudo cat                            | bridge-649359 | jenkins | v1.35.0 | 11 Feb 25 03:24 UTC | 11 Feb 25 03:24 UTC |
	|         | /etc/containerd/config.toml                          |               |         |         |                     |                     |
	| ssh     | -p bridge-649359 sudo                                | bridge-649359 | jenkins | v1.35.0 | 11 Feb 25 03:24 UTC | 11 Feb 25 03:24 UTC |
	|         | containerd config dump                               |               |         |         |                     |                     |
	| ssh     | -p bridge-649359 sudo                                | bridge-649359 | jenkins | v1.35.0 | 11 Feb 25 03:24 UTC | 11 Feb 25 03:24 UTC |
	|         | systemctl status crio --all                          |               |         |         |                     |                     |
	|         | --full --no-pager                                    |               |         |         |                     |                     |
	| ssh     | -p bridge-649359 sudo                                | bridge-649359 | jenkins | v1.35.0 | 11 Feb 25 03:24 UTC | 11 Feb 25 03:24 UTC |
	|         | systemctl cat crio --no-pager                        |               |         |         |                     |                     |
	| ssh     | -p bridge-649359 sudo find                           | bridge-649359 | jenkins | v1.35.0 | 11 Feb 25 03:24 UTC | 11 Feb 25 03:24 UTC |
	|         | /etc/crio -type f -exec sh -c                        |               |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |               |         |         |                     |                     |
	| ssh     | -p bridge-649359 sudo crio                           | bridge-649359 | jenkins | v1.35.0 | 11 Feb 25 03:24 UTC | 11 Feb 25 03:24 UTC |
	|         | config                                               |               |         |         |                     |                     |
	| delete  | -p bridge-649359                                     | bridge-649359 | jenkins | v1.35.0 | 11 Feb 25 03:24 UTC | 11 Feb 25 03:24 UTC |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/11 03:23:13
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0211 03:23:13.081035   76224 out.go:345] Setting OutFile to fd 1 ...
	I0211 03:23:13.081187   76224 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 03:23:13.081200   76224 out.go:358] Setting ErrFile to fd 2...
	I0211 03:23:13.081207   76224 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 03:23:13.081496   76224 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20400-12456/.minikube/bin
	I0211 03:23:13.082126   76224 out.go:352] Setting JSON to false
	I0211 03:23:13.083210   76224 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":7544,"bootTime":1739236649,"procs":307,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0211 03:23:13.083303   76224 start.go:139] virtualization: kvm guest
	I0211 03:23:13.085425   76224 out.go:177] * [bridge-649359] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0211 03:23:13.087070   76224 out.go:177]   - MINIKUBE_LOCATION=20400
	I0211 03:23:13.087088   76224 notify.go:220] Checking for updates...
	I0211 03:23:13.089378   76224 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0211 03:23:13.090807   76224 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20400-12456/kubeconfig
	I0211 03:23:13.091907   76224 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20400-12456/.minikube
	I0211 03:23:13.093076   76224 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0211 03:23:13.094188   76224 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0211 03:23:13.095667   76224 config.go:182] Loaded profile config "enable-default-cni-649359": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0211 03:23:13.095778   76224 config.go:182] Loaded profile config "flannel-649359": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0211 03:23:13.095889   76224 config.go:182] Loaded profile config "old-k8s-version-244815": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0211 03:23:13.095994   76224 driver.go:394] Setting default libvirt URI to qemu:///system
	I0211 03:23:13.136607   76224 out.go:177] * Using the kvm2 driver based on user configuration
	I0211 03:23:13.137908   76224 start.go:297] selected driver: kvm2
	I0211 03:23:13.137925   76224 start.go:901] validating driver "kvm2" against <nil>
	I0211 03:23:13.137936   76224 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0211 03:23:13.138755   76224 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0211 03:23:13.138832   76224 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20400-12456/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0211 03:23:13.155651   76224 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0211 03:23:13.155732   76224 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0211 03:23:13.156061   76224 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0211 03:23:13.156101   76224 cni.go:84] Creating CNI manager for "bridge"
	I0211 03:23:13.156111   76224 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0211 03:23:13.156178   76224 start.go:340] cluster config:
	{Name:bridge-649359 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-649359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPl
ugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: Au
toPauseInterval:1m0s}
	I0211 03:23:13.156321   76224 iso.go:125] acquiring lock: {Name:mkf866c6e52b4efa55cc59a9f329105471716f9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0211 03:23:13.158222   76224 out.go:177] * Starting "bridge-649359" primary control-plane node in "bridge-649359" cluster
	I0211 03:23:13.159578   76224 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0211 03:23:13.159638   76224 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20400-12456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0211 03:23:13.159650   76224 cache.go:56] Caching tarball of preloaded images
	I0211 03:23:13.159745   76224 preload.go:172] Found /home/jenkins/minikube-integration/20400-12456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0211 03:23:13.159757   76224 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0211 03:23:13.159900   76224 profile.go:143] Saving config to /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/config.json ...
	I0211 03:23:13.159922   76224 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/config.json: {Name:mk2f137687eec59fed010b0831cd63b8499c2c53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 03:23:13.160066   76224 start.go:360] acquireMachinesLock for bridge-649359: {Name:mk0cbf79bfabdb28d0a301765db34c154a72eff0 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0211 03:23:13.160096   76224 start.go:364] duration metric: took 17.084µs to acquireMachinesLock for "bridge-649359"
	I0211 03:23:13.160114   76224 start.go:93] Provisioning new machine with config: &{Name:bridge-649359 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-649359 Namespa
ce:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0211 03:23:13.160191   76224 start.go:125] createHost starting for "" (driver="kvm2")
	I0211 03:23:09.983015   73602 pod_ready.go:103] pod "coredns-668d6bf9bc-hvcxh" in "kube-system" namespace has status "Ready":"False"
	I0211 03:23:10.983987   73602 pod_ready.go:93] pod "coredns-668d6bf9bc-hvcxh" in "kube-system" namespace has status "Ready":"True"
	I0211 03:23:10.984014   73602 pod_ready.go:82] duration metric: took 5.507517178s for pod "coredns-668d6bf9bc-hvcxh" in "kube-system" namespace to be "Ready" ...
	I0211 03:23:10.984026   73602 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-lszj7" in "kube-system" namespace to be "Ready" ...
	I0211 03:23:12.497549   73602 pod_ready.go:98] pod "coredns-668d6bf9bc-lszj7" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-02-11 03:23:12 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-02-11 03:23:05 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-02-11 03:23:05 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-02-11 03:23:05 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-02-11 03:23:05 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.50.227 HostIPs:[{IP:192.168.50
.227}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2025-02-11 03:23:05 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-02-11 03:23:06 +0000 UTC,FinishedAt:2025-02-11 03:23:12 +0000 UTC,ContainerID:cri-o://63c8a21527bb528f6980b3e58bd03f4a8eac765b18e787634a7adacf7c5b7e12,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://63c8a21527bb528f6980b3e58bd03f4a8eac765b18e787634a7adacf7c5b7e12 Started:0xc001c4f680 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001ff4170} {Name:kube-api-access-l7qth MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001ff4180}] User:ni
l AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0211 03:23:12.497589   73602 pod_ready.go:82] duration metric: took 1.513552822s for pod "coredns-668d6bf9bc-lszj7" in "kube-system" namespace to be "Ready" ...
	E0211 03:23:12.497608   73602 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-668d6bf9bc-lszj7" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-02-11 03:23:12 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-02-11 03:23:05 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-02-11 03:23:05 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-02-11 03:23:05 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2025-02-11 03:23:05 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.5
0.227 HostIPs:[{IP:192.168.50.227}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2025-02-11 03:23:05 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2025-02-11 03:23:06 +0000 UTC,FinishedAt:2025-02-11 03:23:12 +0000 UTC,ContainerID:cri-o://63c8a21527bb528f6980b3e58bd03f4a8eac765b18e787634a7adacf7c5b7e12,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://63c8a21527bb528f6980b3e58bd03f4a8eac765b18e787634a7adacf7c5b7e12 Started:0xc001c4f680 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001ff4170} {Name:kube-api-access-l7qth MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveRe
adOnly:0xc001ff4180}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0211 03:23:12.497632   73602 pod_ready.go:79] waiting up to 15m0s for pod "etcd-enable-default-cni-649359" in "kube-system" namespace to be "Ready" ...
	I0211 03:23:14.503935   73602 pod_ready.go:93] pod "etcd-enable-default-cni-649359" in "kube-system" namespace has status "Ready":"True"
	I0211 03:23:14.503963   73602 pod_ready.go:82] duration metric: took 2.006319933s for pod "etcd-enable-default-cni-649359" in "kube-system" namespace to be "Ready" ...
	I0211 03:23:14.503988   73602 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-enable-default-cni-649359" in "kube-system" namespace to be "Ready" ...
	I0211 03:23:14.508947   73602 pod_ready.go:93] pod "kube-apiserver-enable-default-cni-649359" in "kube-system" namespace has status "Ready":"True"
	I0211 03:23:14.508976   73602 pod_ready.go:82] duration metric: took 4.979657ms for pod "kube-apiserver-enable-default-cni-649359" in "kube-system" namespace to be "Ready" ...
	I0211 03:23:14.508989   73602 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-enable-default-cni-649359" in "kube-system" namespace to be "Ready" ...
	I0211 03:23:14.517657   73602 pod_ready.go:93] pod "kube-controller-manager-enable-default-cni-649359" in "kube-system" namespace has status "Ready":"True"
	I0211 03:23:14.517691   73602 pod_ready.go:82] duration metric: took 8.694109ms for pod "kube-controller-manager-enable-default-cni-649359" in "kube-system" namespace to be "Ready" ...
	I0211 03:23:14.517708   73602 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-ts7wz" in "kube-system" namespace to be "Ready" ...
	I0211 03:23:14.525934   73602 pod_ready.go:93] pod "kube-proxy-ts7wz" in "kube-system" namespace has status "Ready":"True"
	I0211 03:23:14.525957   73602 pod_ready.go:82] duration metric: took 8.240149ms for pod "kube-proxy-ts7wz" in "kube-system" namespace to be "Ready" ...
	I0211 03:23:14.525970   73602 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-enable-default-cni-649359" in "kube-system" namespace to be "Ready" ...
	I0211 03:23:14.580286   73602 pod_ready.go:93] pod "kube-scheduler-enable-default-cni-649359" in "kube-system" namespace has status "Ready":"True"
	I0211 03:23:14.580312   73602 pod_ready.go:82] duration metric: took 54.332262ms for pod "kube-scheduler-enable-default-cni-649359" in "kube-system" namespace to be "Ready" ...
	I0211 03:23:14.580324   73602 pod_ready.go:39] duration metric: took 9.112283658s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0211 03:23:14.580342   73602 api_server.go:52] waiting for apiserver process to appear ...
	I0211 03:23:14.580402   73602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:23:14.596650   73602 api_server.go:72] duration metric: took 9.490364s to wait for apiserver process to appear ...
	I0211 03:23:14.596678   73602 api_server.go:88] waiting for apiserver healthz status ...
	I0211 03:23:14.596699   73602 api_server.go:253] Checking apiserver healthz at https://192.168.50.227:8443/healthz ...
	I0211 03:23:14.602310   73602 api_server.go:279] https://192.168.50.227:8443/healthz returned 200:
	ok
	I0211 03:23:14.603319   73602 api_server.go:141] control plane version: v1.32.1
	I0211 03:23:14.603343   73602 api_server.go:131] duration metric: took 6.658485ms to wait for apiserver health ...
	I0211 03:23:14.603353   73602 system_pods.go:43] waiting for kube-system pods to appear ...
	I0211 03:23:14.781953   73602 system_pods.go:59] 7 kube-system pods found
	I0211 03:23:14.781995   73602 system_pods.go:61] "coredns-668d6bf9bc-hvcxh" [09bf1572-919d-44aa-9ec7-8879ade61727] Running
	I0211 03:23:14.782004   73602 system_pods.go:61] "etcd-enable-default-cni-649359" [448e08a5-abac-4a6d-8b4b-e22c331a9fe6] Running
	I0211 03:23:14.782011   73602 system_pods.go:61] "kube-apiserver-enable-default-cni-649359" [3f99d598-6375-4fbe-9003-e5fff13e8393] Running
	I0211 03:23:14.782018   73602 system_pods.go:61] "kube-controller-manager-enable-default-cni-649359" [c7b62bcf-3720-4b14-91de-2a63ea303ea9] Running
	I0211 03:23:14.782023   73602 system_pods.go:61] "kube-proxy-ts7wz" [63d1bb7d-fd8d-49bc-a22f-8df07e7d4e40] Running
	I0211 03:23:14.782030   73602 system_pods.go:61] "kube-scheduler-enable-default-cni-649359" [661b33bc-c632-495f-bda7-5cecf5551b1a] Running
	I0211 03:23:14.782037   73602 system_pods.go:61] "storage-provisioner" [5cd25b79-78ab-4fe4-956b-2fc2424efd9d] Running
	I0211 03:23:14.782046   73602 system_pods.go:74] duration metric: took 178.684869ms to wait for pod list to return data ...
	I0211 03:23:14.782062   73602 default_sa.go:34] waiting for default service account to be created ...
	I0211 03:23:14.982953   73602 default_sa.go:45] found service account: "default"
	I0211 03:23:14.982984   73602 default_sa.go:55] duration metric: took 200.913238ms for default service account to be created ...
	I0211 03:23:14.982997   73602 system_pods.go:116] waiting for k8s-apps to be running ...
	I0211 03:23:15.182233   73602 system_pods.go:86] 7 kube-system pods found
	I0211 03:23:15.182269   73602 system_pods.go:89] "coredns-668d6bf9bc-hvcxh" [09bf1572-919d-44aa-9ec7-8879ade61727] Running
	I0211 03:23:15.182281   73602 system_pods.go:89] "etcd-enable-default-cni-649359" [448e08a5-abac-4a6d-8b4b-e22c331a9fe6] Running
	I0211 03:23:15.182288   73602 system_pods.go:89] "kube-apiserver-enable-default-cni-649359" [3f99d598-6375-4fbe-9003-e5fff13e8393] Running
	I0211 03:23:15.182294   73602 system_pods.go:89] "kube-controller-manager-enable-default-cni-649359" [c7b62bcf-3720-4b14-91de-2a63ea303ea9] Running
	I0211 03:23:15.182299   73602 system_pods.go:89] "kube-proxy-ts7wz" [63d1bb7d-fd8d-49bc-a22f-8df07e7d4e40] Running
	I0211 03:23:15.182305   73602 system_pods.go:89] "kube-scheduler-enable-default-cni-649359" [661b33bc-c632-495f-bda7-5cecf5551b1a] Running
	I0211 03:23:15.182314   73602 system_pods.go:89] "storage-provisioner" [5cd25b79-78ab-4fe4-956b-2fc2424efd9d] Running
	I0211 03:23:15.182325   73602 system_pods.go:126] duration metric: took 199.318436ms to wait for k8s-apps to be running ...
	I0211 03:23:15.182339   73602 system_svc.go:44] waiting for kubelet service to be running ....
	I0211 03:23:15.182396   73602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0211 03:23:15.197122   73602 system_svc.go:56] duration metric: took 14.775768ms WaitForService to wait for kubelet
	I0211 03:23:15.197147   73602 kubeadm.go:582] duration metric: took 10.090865662s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0211 03:23:15.197175   73602 node_conditions.go:102] verifying NodePressure condition ...
	I0211 03:23:15.384040   73602 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0211 03:23:15.384075   73602 node_conditions.go:123] node cpu capacity is 2
	I0211 03:23:15.384091   73602 node_conditions.go:105] duration metric: took 186.907093ms to run NodePressure ...
	I0211 03:23:15.384116   73602 start.go:241] waiting for startup goroutines ...
	I0211 03:23:15.384132   73602 start.go:246] waiting for cluster config update ...
	I0211 03:23:15.384147   73602 start.go:255] writing updated cluster config ...
	I0211 03:23:15.384497   73602 ssh_runner.go:195] Run: rm -f paused
	I0211 03:23:15.442411   73602 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0211 03:23:15.445085   73602 out.go:177] * Done! kubectl is now configured to use "enable-default-cni-649359" cluster and "default" namespace by default
	I0211 03:23:15.195574   74474 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.382506392s)
	I0211 03:23:15.195612   74474 crio.go:469] duration metric: took 2.382639633s to extract the tarball
	I0211 03:23:15.195621   74474 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0211 03:23:15.233474   74474 ssh_runner.go:195] Run: sudo crictl images --output json
	I0211 03:23:15.276475   74474 crio.go:514] all images are preloaded for cri-o runtime.
	I0211 03:23:15.276501   74474 cache_images.go:84] Images are preloaded, skipping loading
	I0211 03:23:15.276510   74474 kubeadm.go:934] updating node { 192.168.72.59 8443 v1.32.1 crio true true} ...
	I0211 03:23:15.276617   74474 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=flannel-649359 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.59
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:flannel-649359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel}
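
The drop-in above uses the standard systemd override idiom: the first, empty ExecStart= clears the unit's original start command so that the second ExecStart= replaces it instead of adding a second command. Purely as a hypothetical illustration (not minikube's actual generator), a drop-in like this could be rendered with Go's text/template, filling in the kubelet path, hostname override and node IP that appear in the log:

package main

import (
	"os"
	"text/template"
)

// Hypothetical template mirroring the drop-in logged above.
const kubeletDropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	// Values taken from the ExecStart line in the log above.
	params := struct{ KubeletPath, NodeName, NodeIP string }{
		KubeletPath: "/var/lib/minikube/binaries/v1.32.1/kubelet",
		NodeName:    "flannel-649359",
		NodeIP:      "192.168.72.59",
	}
	// Print the rendered drop-in; on the node this content is written to
	// /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp step below).
	tmpl := template.Must(template.New("kubelet-dropin").Parse(kubeletDropIn))
	if err := tmpl.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}
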
	I0211 03:23:15.276679   74474 ssh_runner.go:195] Run: crio config
	I0211 03:23:15.329421   74474 cni.go:84] Creating CNI manager for "flannel"
	I0211 03:23:15.329449   74474 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0211 03:23:15.329503   74474 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.59 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:flannel-649359 NodeName:flannel-649359 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.59"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.59 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/ku
bernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0211 03:23:15.329667   74474 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.59
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "flannel-649359"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.59"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.59"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
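The generated config above is a single multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) that is written to /var/tmp/minikube/kubeadm.yaml.new and moved into place as /var/tmp/minikube/kubeadm.yaml (see the scp and cp steps further down). As a minimal sketch, assuming the gopkg.in/yaml.v3 module is available, the file can be split back into its per-kind documents for inspection:

package main

import (
	"fmt"
	"os"
	"strings"

	"gopkg.in/yaml.v3"
)

func main() {
	// Path from the log; point this at a local copy when running off-node.
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	// kubeadm separates the component configs with standalone "---" lines.
	for _, doc := range strings.Split(string(data), "\n---\n") {
		var meta struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := yaml.Unmarshal([]byte(doc), &meta); err != nil {
			fmt.Fprintln(os.Stderr, "unparsable document:", err)
			continue
		}
		fmt.Printf("%s %s\n", meta.APIVersion, meta.Kind)
	}
}

Decoding only apiVersion and kind keeps the sketch independent of the kubeadm API types; the full documents could instead be unmarshalled into the corresponding typed structs if those packages are vendored.
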
	I0211 03:23:15.329748   74474 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0211 03:23:15.341419   74474 binaries.go:44] Found k8s binaries, skipping transfer
	I0211 03:23:15.341514   74474 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0211 03:23:15.351809   74474 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0211 03:23:15.368240   74474 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0211 03:23:15.388068   74474 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2291 bytes)
	I0211 03:23:15.406303   74474 ssh_runner.go:195] Run: grep 192.168.72.59	control-plane.minikube.internal$ /etc/hosts
	I0211 03:23:15.410604   74474 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.59	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0211 03:23:15.423051   74474 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0211 03:23:15.583428   74474 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0211 03:23:15.611217   74474 certs.go:68] Setting up /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359 for IP: 192.168.72.59
	I0211 03:23:15.611246   74474 certs.go:194] generating shared ca certs ...
	I0211 03:23:15.611270   74474 certs.go:226] acquiring lock for ca certs: {Name:mk14e70e4f3b98aff6eac535114852cc1d70eb3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 03:23:15.611470   74474 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20400-12456/.minikube/ca.key
	I0211 03:23:15.611537   74474 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20400-12456/.minikube/proxy-client-ca.key
	I0211 03:23:15.611554   74474 certs.go:256] generating profile certs ...
	I0211 03:23:15.611652   74474 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/client.key
	I0211 03:23:15.611677   74474 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/client.crt with IP's: []
	I0211 03:23:15.995256   74474 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/client.crt ...
	I0211 03:23:15.995283   74474 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/client.crt: {Name:mkbdf2ec339d7105059cec29fe5c2f5bd0dc1412 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 03:23:15.995430   74474 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/client.key ...
	I0211 03:23:15.995440   74474 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/client.key: {Name:mk7c5762a04702befc810b6a06ee4f9739e5f86a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 03:23:15.995512   74474 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/apiserver.key.0cce74ff
	I0211 03:23:15.995527   74474 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/apiserver.crt.0cce74ff with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.59]
	I0211 03:23:16.130389   74474 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/apiserver.crt.0cce74ff ...
	I0211 03:23:16.130415   74474 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/apiserver.crt.0cce74ff: {Name:mke0717e04de367ea0b393259377ff7fe47ea1c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 03:23:16.130570   74474 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/apiserver.key.0cce74ff ...
	I0211 03:23:16.130582   74474 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/apiserver.key.0cce74ff: {Name:mk409a3ee4e8749e5e84086d3851197f78ce022a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 03:23:16.130647   74474 certs.go:381] copying /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/apiserver.crt.0cce74ff -> /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/apiserver.crt
	I0211 03:23:16.130725   74474 certs.go:385] copying /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/apiserver.key.0cce74ff -> /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/apiserver.key
	I0211 03:23:16.130786   74474 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/proxy-client.key
	I0211 03:23:16.130801   74474 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/proxy-client.crt with IP's: []
	I0211 03:23:16.490091   74474 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/proxy-client.crt ...
	I0211 03:23:16.490127   74474 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/proxy-client.crt: {Name:mke9f0321496c7ad0c90bde87c49c02b8699bb9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 03:23:16.490314   74474 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/proxy-client.key ...
	I0211 03:23:16.490332   74474 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/proxy-client.key: {Name:mk7e32c1f4c9365545d3195e51a54f0c9815aad4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 03:23:16.490528   74474 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/19645.pem (1338 bytes)
	W0211 03:23:16.490565   74474 certs.go:480] ignoring /home/jenkins/minikube-integration/20400-12456/.minikube/certs/19645_empty.pem, impossibly tiny 0 bytes
	I0211 03:23:16.490576   74474 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca-key.pem (1675 bytes)
	I0211 03:23:16.490598   74474 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca.pem (1078 bytes)
	I0211 03:23:16.490618   74474 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/cert.pem (1123 bytes)
	I0211 03:23:16.490644   74474 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/key.pem (1679 bytes)
	I0211 03:23:16.490684   74474 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12456/.minikube/files/etc/ssl/certs/196452.pem (1708 bytes)
	I0211 03:23:16.491315   74474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0211 03:23:16.523704   74474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0211 03:23:16.549967   74474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0211 03:23:16.575117   74474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0211 03:23:16.608189   74474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0211 03:23:16.636645   74474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0211 03:23:16.659645   74474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0211 03:23:16.715276   74474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/flannel-649359/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0211 03:23:16.742104   74474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/certs/19645.pem --> /usr/share/ca-certificates/19645.pem (1338 bytes)
	I0211 03:23:16.766537   74474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/files/etc/ssl/certs/196452.pem --> /usr/share/ca-certificates/196452.pem (1708 bytes)
	I0211 03:23:16.790666   74474 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0211 03:23:16.814234   74474 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0211 03:23:16.832059   74474 ssh_runner.go:195] Run: openssl version
	I0211 03:23:16.837672   74474 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19645.pem && ln -fs /usr/share/ca-certificates/19645.pem /etc/ssl/certs/19645.pem"
	I0211 03:23:16.848227   74474 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19645.pem
	I0211 03:23:16.852664   74474 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 11 02:09 /usr/share/ca-certificates/19645.pem
	I0211 03:23:16.852725   74474 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19645.pem
	I0211 03:23:16.858666   74474 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/19645.pem /etc/ssl/certs/51391683.0"
	I0211 03:23:16.869139   74474 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/196452.pem && ln -fs /usr/share/ca-certificates/196452.pem /etc/ssl/certs/196452.pem"
	I0211 03:23:16.879767   74474 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/196452.pem
	I0211 03:23:16.884005   74474 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 11 02:09 /usr/share/ca-certificates/196452.pem
	I0211 03:23:16.884048   74474 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/196452.pem
	I0211 03:23:16.889414   74474 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/196452.pem /etc/ssl/certs/3ec20f2e.0"
	I0211 03:23:16.903168   74474 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0211 03:23:16.916947   74474 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0211 03:23:16.922330   74474 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 11 02:02 /usr/share/ca-certificates/minikubeCA.pem
	I0211 03:23:16.922400   74474 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0211 03:23:16.928123   74474 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0211 03:23:16.939389   74474 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0211 03:23:16.943085   74474 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0211 03:23:16.943141   74474 kubeadm.go:392] StartCluster: {Name:flannel-649359 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:flannel-649359 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.72.59 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0211 03:23:16.943206   74474 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0211 03:23:16.943242   74474 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0211 03:23:16.981955   74474 cri.go:89] found id: ""
	I0211 03:23:16.982025   74474 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0211 03:23:16.993058   74474 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0211 03:23:17.002019   74474 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0211 03:23:17.010942   74474 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0211 03:23:17.010966   74474 kubeadm.go:157] found existing configuration files:
	
	I0211 03:23:17.011017   74474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0211 03:23:17.019432   74474 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0211 03:23:17.019497   74474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0211 03:23:17.029653   74474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0211 03:23:17.039298   74474 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0211 03:23:17.039360   74474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0211 03:23:17.049050   74474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0211 03:23:17.058831   74474 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0211 03:23:17.058945   74474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0211 03:23:17.068991   74474 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0211 03:23:17.078752   74474 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0211 03:23:17.078811   74474 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0211 03:23:17.089006   74474 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0211 03:23:17.304026   74474 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0211 03:23:13.167417   76224 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0211 03:23:13.167638   76224 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 03:23:13.167696   76224 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 03:23:13.189234   76224 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42317
	I0211 03:23:13.189724   76224 main.go:141] libmachine: () Calling .GetVersion
	I0211 03:23:13.190382   76224 main.go:141] libmachine: Using API Version  1
	I0211 03:23:13.190408   76224 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 03:23:13.190728   76224 main.go:141] libmachine: () Calling .GetMachineName
	I0211 03:23:13.190992   76224 main.go:141] libmachine: (bridge-649359) Calling .GetMachineName
	I0211 03:23:13.191140   76224 main.go:141] libmachine: (bridge-649359) Calling .DriverName
	I0211 03:23:13.191264   76224 start.go:159] libmachine.API.Create for "bridge-649359" (driver="kvm2")
	I0211 03:23:13.191286   76224 client.go:168] LocalClient.Create starting
	I0211 03:23:13.191315   76224 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca.pem
	I0211 03:23:13.191349   76224 main.go:141] libmachine: Decoding PEM data...
	I0211 03:23:13.191362   76224 main.go:141] libmachine: Parsing certificate...
	I0211 03:23:13.191421   76224 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20400-12456/.minikube/certs/cert.pem
	I0211 03:23:13.191440   76224 main.go:141] libmachine: Decoding PEM data...
	I0211 03:23:13.191454   76224 main.go:141] libmachine: Parsing certificate...
	I0211 03:23:13.191471   76224 main.go:141] libmachine: Running pre-create checks...
	I0211 03:23:13.191478   76224 main.go:141] libmachine: (bridge-649359) Calling .PreCreateCheck
	I0211 03:23:13.191910   76224 main.go:141] libmachine: (bridge-649359) Calling .GetConfigRaw
	I0211 03:23:13.192281   76224 main.go:141] libmachine: Creating machine...
	I0211 03:23:13.192294   76224 main.go:141] libmachine: (bridge-649359) Calling .Create
	I0211 03:23:13.192443   76224 main.go:141] libmachine: (bridge-649359) creating KVM machine...
	I0211 03:23:13.192454   76224 main.go:141] libmachine: (bridge-649359) creating network...
	I0211 03:23:13.193881   76224 main.go:141] libmachine: (bridge-649359) DBG | found existing default KVM network
	I0211 03:23:13.195029   76224 main.go:141] libmachine: (bridge-649359) DBG | I0211 03:23:13.194906   76257 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:87:bb:4d} reservation:<nil>}
	I0211 03:23:13.195938   76224 main.go:141] libmachine: (bridge-649359) DBG | I0211 03:23:13.195871   76257 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:25:b5:96} reservation:<nil>}
	I0211 03:23:13.197691   76224 main.go:141] libmachine: (bridge-649359) DBG | I0211 03:23:13.197624   76257 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000187c60}
	I0211 03:23:13.197851   76224 main.go:141] libmachine: (bridge-649359) DBG | created network xml: 
	I0211 03:23:13.197868   76224 main.go:141] libmachine: (bridge-649359) DBG | <network>
	I0211 03:23:13.197879   76224 main.go:141] libmachine: (bridge-649359) DBG |   <name>mk-bridge-649359</name>
	I0211 03:23:13.197894   76224 main.go:141] libmachine: (bridge-649359) DBG |   <dns enable='no'/>
	I0211 03:23:13.197905   76224 main.go:141] libmachine: (bridge-649359) DBG |   
	I0211 03:23:13.197918   76224 main.go:141] libmachine: (bridge-649359) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0211 03:23:13.197931   76224 main.go:141] libmachine: (bridge-649359) DBG |     <dhcp>
	I0211 03:23:13.197943   76224 main.go:141] libmachine: (bridge-649359) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0211 03:23:13.197954   76224 main.go:141] libmachine: (bridge-649359) DBG |     </dhcp>
	I0211 03:23:13.197961   76224 main.go:141] libmachine: (bridge-649359) DBG |   </ip>
	I0211 03:23:13.197974   76224 main.go:141] libmachine: (bridge-649359) DBG |   
	I0211 03:23:13.197984   76224 main.go:141] libmachine: (bridge-649359) DBG | </network>
	I0211 03:23:13.197996   76224 main.go:141] libmachine: (bridge-649359) DBG | 
	I0211 03:23:13.204135   76224 main.go:141] libmachine: (bridge-649359) DBG | trying to create private KVM network mk-bridge-649359 192.168.61.0/24...
	I0211 03:23:13.289762   76224 main.go:141] libmachine: (bridge-649359) DBG | private KVM network mk-bridge-649359 192.168.61.0/24 created
	I0211 03:23:13.289795   76224 main.go:141] libmachine: (bridge-649359) setting up store path in /home/jenkins/minikube-integration/20400-12456/.minikube/machines/bridge-649359 ...
	I0211 03:23:13.289809   76224 main.go:141] libmachine: (bridge-649359) DBG | I0211 03:23:13.289718   76257 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20400-12456/.minikube
	I0211 03:23:13.289828   76224 main.go:141] libmachine: (bridge-649359) building disk image from file:///home/jenkins/minikube-integration/20400-12456/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0211 03:23:13.289992   76224 main.go:141] libmachine: (bridge-649359) Downloading /home/jenkins/minikube-integration/20400-12456/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20400-12456/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0211 03:23:13.568686   76224 main.go:141] libmachine: (bridge-649359) DBG | I0211 03:23:13.568510   76257 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20400-12456/.minikube/machines/bridge-649359/id_rsa...
	I0211 03:23:13.673673   76224 main.go:141] libmachine: (bridge-649359) DBG | I0211 03:23:13.673537   76257 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20400-12456/.minikube/machines/bridge-649359/bridge-649359.rawdisk...
	I0211 03:23:13.673703   76224 main.go:141] libmachine: (bridge-649359) DBG | Writing magic tar header
	I0211 03:23:13.673718   76224 main.go:141] libmachine: (bridge-649359) DBG | Writing SSH key tar header
	I0211 03:23:13.673734   76224 main.go:141] libmachine: (bridge-649359) DBG | I0211 03:23:13.673652   76257 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20400-12456/.minikube/machines/bridge-649359 ...
	I0211 03:23:13.673808   76224 main.go:141] libmachine: (bridge-649359) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20400-12456/.minikube/machines/bridge-649359
	I0211 03:23:13.673841   76224 main.go:141] libmachine: (bridge-649359) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20400-12456/.minikube/machines
	I0211 03:23:13.673873   76224 main.go:141] libmachine: (bridge-649359) setting executable bit set on /home/jenkins/minikube-integration/20400-12456/.minikube/machines/bridge-649359 (perms=drwx------)
	I0211 03:23:13.673892   76224 main.go:141] libmachine: (bridge-649359) setting executable bit set on /home/jenkins/minikube-integration/20400-12456/.minikube/machines (perms=drwxr-xr-x)
	I0211 03:23:13.673919   76224 main.go:141] libmachine: (bridge-649359) setting executable bit set on /home/jenkins/minikube-integration/20400-12456/.minikube (perms=drwxr-xr-x)
	I0211 03:23:13.673932   76224 main.go:141] libmachine: (bridge-649359) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20400-12456/.minikube
	I0211 03:23:13.673948   76224 main.go:141] libmachine: (bridge-649359) DBG | checking permissions on dir: /home/jenkins/minikube-integration/20400-12456
	I0211 03:23:13.673956   76224 main.go:141] libmachine: (bridge-649359) DBG | checking permissions on dir: /home/jenkins/minikube-integration
	I0211 03:23:13.673965   76224 main.go:141] libmachine: (bridge-649359) DBG | checking permissions on dir: /home/jenkins
	I0211 03:23:13.673972   76224 main.go:141] libmachine: (bridge-649359) DBG | checking permissions on dir: /home
	I0211 03:23:13.673982   76224 main.go:141] libmachine: (bridge-649359) DBG | skipping /home - not owner
	I0211 03:23:13.674028   76224 main.go:141] libmachine: (bridge-649359) setting executable bit set on /home/jenkins/minikube-integration/20400-12456 (perms=drwxrwxr-x)
	I0211 03:23:13.674047   76224 main.go:141] libmachine: (bridge-649359) setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0211 03:23:13.674064   76224 main.go:141] libmachine: (bridge-649359) setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0211 03:23:13.674074   76224 main.go:141] libmachine: (bridge-649359) creating domain...
	I0211 03:23:13.675082   76224 main.go:141] libmachine: (bridge-649359) define libvirt domain using xml: 
	I0211 03:23:13.675104   76224 main.go:141] libmachine: (bridge-649359) <domain type='kvm'>
	I0211 03:23:13.675140   76224 main.go:141] libmachine: (bridge-649359)   <name>bridge-649359</name>
	I0211 03:23:13.675166   76224 main.go:141] libmachine: (bridge-649359)   <memory unit='MiB'>3072</memory>
	I0211 03:23:13.675192   76224 main.go:141] libmachine: (bridge-649359)   <vcpu>2</vcpu>
	I0211 03:23:13.675228   76224 main.go:141] libmachine: (bridge-649359)   <features>
	I0211 03:23:13.675245   76224 main.go:141] libmachine: (bridge-649359)     <acpi/>
	I0211 03:23:13.675256   76224 main.go:141] libmachine: (bridge-649359)     <apic/>
	I0211 03:23:13.675282   76224 main.go:141] libmachine: (bridge-649359)     <pae/>
	I0211 03:23:13.675302   76224 main.go:141] libmachine: (bridge-649359)     
	I0211 03:23:13.675312   76224 main.go:141] libmachine: (bridge-649359)   </features>
	I0211 03:23:13.675331   76224 main.go:141] libmachine: (bridge-649359)   <cpu mode='host-passthrough'>
	I0211 03:23:13.675340   76224 main.go:141] libmachine: (bridge-649359)   
	I0211 03:23:13.675348   76224 main.go:141] libmachine: (bridge-649359)   </cpu>
	I0211 03:23:13.675356   76224 main.go:141] libmachine: (bridge-649359)   <os>
	I0211 03:23:13.675364   76224 main.go:141] libmachine: (bridge-649359)     <type>hvm</type>
	I0211 03:23:13.675371   76224 main.go:141] libmachine: (bridge-649359)     <boot dev='cdrom'/>
	I0211 03:23:13.675381   76224 main.go:141] libmachine: (bridge-649359)     <boot dev='hd'/>
	I0211 03:23:13.675390   76224 main.go:141] libmachine: (bridge-649359)     <bootmenu enable='no'/>
	I0211 03:23:13.675401   76224 main.go:141] libmachine: (bridge-649359)   </os>
	I0211 03:23:13.675408   76224 main.go:141] libmachine: (bridge-649359)   <devices>
	I0211 03:23:13.675429   76224 main.go:141] libmachine: (bridge-649359)     <disk type='file' device='cdrom'>
	I0211 03:23:13.675449   76224 main.go:141] libmachine: (bridge-649359)       <source file='/home/jenkins/minikube-integration/20400-12456/.minikube/machines/bridge-649359/boot2docker.iso'/>
	I0211 03:23:13.675463   76224 main.go:141] libmachine: (bridge-649359)       <target dev='hdc' bus='scsi'/>
	I0211 03:23:13.675473   76224 main.go:141] libmachine: (bridge-649359)       <readonly/>
	I0211 03:23:13.675490   76224 main.go:141] libmachine: (bridge-649359)     </disk>
	I0211 03:23:13.675502   76224 main.go:141] libmachine: (bridge-649359)     <disk type='file' device='disk'>
	I0211 03:23:13.675516   76224 main.go:141] libmachine: (bridge-649359)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0211 03:23:13.675534   76224 main.go:141] libmachine: (bridge-649359)       <source file='/home/jenkins/minikube-integration/20400-12456/.minikube/machines/bridge-649359/bridge-649359.rawdisk'/>
	I0211 03:23:13.675553   76224 main.go:141] libmachine: (bridge-649359)       <target dev='hda' bus='virtio'/>
	I0211 03:23:13.675563   76224 main.go:141] libmachine: (bridge-649359)     </disk>
	I0211 03:23:13.675572   76224 main.go:141] libmachine: (bridge-649359)     <interface type='network'>
	I0211 03:23:13.675583   76224 main.go:141] libmachine: (bridge-649359)       <source network='mk-bridge-649359'/>
	I0211 03:23:13.675593   76224 main.go:141] libmachine: (bridge-649359)       <model type='virtio'/>
	I0211 03:23:13.675602   76224 main.go:141] libmachine: (bridge-649359)     </interface>
	I0211 03:23:13.675611   76224 main.go:141] libmachine: (bridge-649359)     <interface type='network'>
	I0211 03:23:13.675621   76224 main.go:141] libmachine: (bridge-649359)       <source network='default'/>
	I0211 03:23:13.675629   76224 main.go:141] libmachine: (bridge-649359)       <model type='virtio'/>
	I0211 03:23:13.675638   76224 main.go:141] libmachine: (bridge-649359)     </interface>
	I0211 03:23:13.675647   76224 main.go:141] libmachine: (bridge-649359)     <serial type='pty'>
	I0211 03:23:13.675656   76224 main.go:141] libmachine: (bridge-649359)       <target port='0'/>
	I0211 03:23:13.675663   76224 main.go:141] libmachine: (bridge-649359)     </serial>
	I0211 03:23:13.675673   76224 main.go:141] libmachine: (bridge-649359)     <console type='pty'>
	I0211 03:23:13.675682   76224 main.go:141] libmachine: (bridge-649359)       <target type='serial' port='0'/>
	I0211 03:23:13.675692   76224 main.go:141] libmachine: (bridge-649359)     </console>
	I0211 03:23:13.675700   76224 main.go:141] libmachine: (bridge-649359)     <rng model='virtio'>
	I0211 03:23:13.675711   76224 main.go:141] libmachine: (bridge-649359)       <backend model='random'>/dev/random</backend>
	I0211 03:23:13.675719   76224 main.go:141] libmachine: (bridge-649359)     </rng>
	I0211 03:23:13.675728   76224 main.go:141] libmachine: (bridge-649359)     
	I0211 03:23:13.675752   76224 main.go:141] libmachine: (bridge-649359)     
	I0211 03:23:13.675762   76224 main.go:141] libmachine: (bridge-649359)   </devices>
	I0211 03:23:13.675770   76224 main.go:141] libmachine: (bridge-649359) </domain>
	I0211 03:23:13.675779   76224 main.go:141] libmachine: (bridge-649359) 
	I0211 03:23:13.679996   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2b:62:5e in network default
	I0211 03:23:13.680573   76224 main.go:141] libmachine: (bridge-649359) starting domain...
	I0211 03:23:13.680599   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:13.680607   76224 main.go:141] libmachine: (bridge-649359) ensuring networks are active...
	I0211 03:23:13.681392   76224 main.go:141] libmachine: (bridge-649359) Ensuring network default is active
	I0211 03:23:13.681717   76224 main.go:141] libmachine: (bridge-649359) Ensuring network mk-bridge-649359 is active
	I0211 03:23:13.682447   76224 main.go:141] libmachine: (bridge-649359) getting domain XML...
	I0211 03:23:13.683352   76224 main.go:141] libmachine: (bridge-649359) creating domain...
	I0211 03:23:15.051707   76224 main.go:141] libmachine: (bridge-649359) waiting for IP...
	I0211 03:23:15.052588   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:15.053088   76224 main.go:141] libmachine: (bridge-649359) DBG | unable to find current IP address of domain bridge-649359 in network mk-bridge-649359
	I0211 03:23:15.053161   76224 main.go:141] libmachine: (bridge-649359) DBG | I0211 03:23:15.053065   76257 retry.go:31] will retry after 209.639096ms: waiting for domain to come up
	I0211 03:23:15.264732   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:15.265421   76224 main.go:141] libmachine: (bridge-649359) DBG | unable to find current IP address of domain bridge-649359 in network mk-bridge-649359
	I0211 03:23:15.265457   76224 main.go:141] libmachine: (bridge-649359) DBG | I0211 03:23:15.265390   76257 retry.go:31] will retry after 262.285345ms: waiting for domain to come up
	I0211 03:23:15.529778   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:15.530315   76224 main.go:141] libmachine: (bridge-649359) DBG | unable to find current IP address of domain bridge-649359 in network mk-bridge-649359
	I0211 03:23:15.530352   76224 main.go:141] libmachine: (bridge-649359) DBG | I0211 03:23:15.530309   76257 retry.go:31] will retry after 393.216116ms: waiting for domain to come up
	I0211 03:23:15.924884   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:15.925534   76224 main.go:141] libmachine: (bridge-649359) DBG | unable to find current IP address of domain bridge-649359 in network mk-bridge-649359
	I0211 03:23:15.925564   76224 main.go:141] libmachine: (bridge-649359) DBG | I0211 03:23:15.925493   76257 retry.go:31] will retry after 419.879829ms: waiting for domain to come up
	I0211 03:23:16.347214   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:16.347785   76224 main.go:141] libmachine: (bridge-649359) DBG | unable to find current IP address of domain bridge-649359 in network mk-bridge-649359
	I0211 03:23:16.347809   76224 main.go:141] libmachine: (bridge-649359) DBG | I0211 03:23:16.347757   76257 retry.go:31] will retry after 550.153899ms: waiting for domain to come up
	I0211 03:23:16.898975   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:16.899431   76224 main.go:141] libmachine: (bridge-649359) DBG | unable to find current IP address of domain bridge-649359 in network mk-bridge-649359
	I0211 03:23:16.899459   76224 main.go:141] libmachine: (bridge-649359) DBG | I0211 03:23:16.899394   76257 retry.go:31] will retry after 589.858812ms: waiting for domain to come up
	I0211 03:23:17.491285   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:17.491779   76224 main.go:141] libmachine: (bridge-649359) DBG | unable to find current IP address of domain bridge-649359 in network mk-bridge-649359
	I0211 03:23:17.491824   76224 main.go:141] libmachine: (bridge-649359) DBG | I0211 03:23:17.491763   76257 retry.go:31] will retry after 928.895182ms: waiting for domain to come up
	I0211 03:23:18.422036   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:18.422602   76224 main.go:141] libmachine: (bridge-649359) DBG | unable to find current IP address of domain bridge-649359 in network mk-bridge-649359
	I0211 03:23:18.422658   76224 main.go:141] libmachine: (bridge-649359) DBG | I0211 03:23:18.422593   76257 retry.go:31] will retry after 1.417755247s: waiting for domain to come up
	I0211 03:23:19.841760   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:19.842278   76224 main.go:141] libmachine: (bridge-649359) DBG | unable to find current IP address of domain bridge-649359 in network mk-bridge-649359
	I0211 03:23:19.842304   76224 main.go:141] libmachine: (bridge-649359) DBG | I0211 03:23:19.842255   76257 retry.go:31] will retry after 1.224447824s: waiting for domain to come up
	I0211 03:23:21.068177   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:21.068656   76224 main.go:141] libmachine: (bridge-649359) DBG | unable to find current IP address of domain bridge-649359 in network mk-bridge-649359
	I0211 03:23:21.068684   76224 main.go:141] libmachine: (bridge-649359) DBG | I0211 03:23:21.068629   76257 retry.go:31] will retry after 1.494225448s: waiting for domain to come up
	I0211 03:23:22.564518   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:22.565104   76224 main.go:141] libmachine: (bridge-649359) DBG | unable to find current IP address of domain bridge-649359 in network mk-bridge-649359
	I0211 03:23:22.565138   76224 main.go:141] libmachine: (bridge-649359) DBG | I0211 03:23:22.565052   76257 retry.go:31] will retry after 1.772565324s: waiting for domain to come up
	I0211 03:23:26.918722   74474 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0211 03:23:26.918779   74474 kubeadm.go:310] [preflight] Running pre-flight checks
	I0211 03:23:26.918857   74474 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0211 03:23:26.918980   74474 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0211 03:23:26.919097   74474 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0211 03:23:26.919151   74474 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0211 03:23:26.920561   74474 out.go:235]   - Generating certificates and keys ...
	I0211 03:23:26.920626   74474 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0211 03:23:26.920677   74474 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0211 03:23:26.920733   74474 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0211 03:23:26.920785   74474 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0211 03:23:26.920848   74474 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0211 03:23:26.920901   74474 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0211 03:23:26.920948   74474 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0211 03:23:26.921076   74474 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [flannel-649359 localhost] and IPs [192.168.72.59 127.0.0.1 ::1]
	I0211 03:23:26.921172   74474 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0211 03:23:26.921288   74474 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [flannel-649359 localhost] and IPs [192.168.72.59 127.0.0.1 ::1]
	I0211 03:23:26.921392   74474 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0211 03:23:26.921452   74474 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0211 03:23:26.921498   74474 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0211 03:23:26.921562   74474 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0211 03:23:26.921630   74474 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0211 03:23:26.921716   74474 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0211 03:23:26.921805   74474 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0211 03:23:26.921916   74474 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0211 03:23:26.922002   74474 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0211 03:23:26.922131   74474 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0211 03:23:26.922222   74474 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0211 03:23:26.923474   74474 out.go:235]   - Booting up control plane ...
	I0211 03:23:26.923561   74474 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0211 03:23:26.923649   74474 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0211 03:23:26.923706   74474 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0211 03:23:26.923812   74474 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0211 03:23:26.923900   74474 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0211 03:23:26.923945   74474 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0211 03:23:26.924054   74474 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0211 03:23:26.924157   74474 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0211 03:23:26.924215   74474 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.704091ms
	I0211 03:23:26.924291   74474 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0211 03:23:26.924363   74474 kubeadm.go:310] [api-check] The API server is healthy after 5.001216504s
	I0211 03:23:26.924491   74474 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0211 03:23:26.924592   74474 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0211 03:23:26.924676   74474 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0211 03:23:26.924893   74474 kubeadm.go:310] [mark-control-plane] Marking the node flannel-649359 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0211 03:23:26.924944   74474 kubeadm.go:310] [bootstrap-token] Using token: ebiq8w.v8x3puwqitndsbjn
	I0211 03:23:26.926051   74474 out.go:235]   - Configuring RBAC rules ...
	I0211 03:23:26.926155   74474 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0211 03:23:26.926227   74474 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0211 03:23:26.926354   74474 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0211 03:23:26.926470   74474 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0211 03:23:26.926585   74474 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0211 03:23:26.926690   74474 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0211 03:23:26.926853   74474 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0211 03:23:26.926942   74474 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0211 03:23:26.927006   74474 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0211 03:23:26.927016   74474 kubeadm.go:310] 
	I0211 03:23:26.927093   74474 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0211 03:23:26.927108   74474 kubeadm.go:310] 
	I0211 03:23:26.927222   74474 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0211 03:23:26.927232   74474 kubeadm.go:310] 
	I0211 03:23:26.927274   74474 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0211 03:23:26.927357   74474 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0211 03:23:26.927439   74474 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0211 03:23:26.927448   74474 kubeadm.go:310] 
	I0211 03:23:26.927495   74474 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0211 03:23:26.927501   74474 kubeadm.go:310] 
	I0211 03:23:26.927541   74474 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0211 03:23:26.927547   74474 kubeadm.go:310] 
	I0211 03:23:26.927590   74474 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0211 03:23:26.927675   74474 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0211 03:23:26.927736   74474 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0211 03:23:26.927743   74474 kubeadm.go:310] 
	I0211 03:23:26.927810   74474 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0211 03:23:26.927871   74474 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0211 03:23:26.927876   74474 kubeadm.go:310] 
	I0211 03:23:26.927943   74474 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ebiq8w.v8x3puwqitndsbjn \
	I0211 03:23:26.928033   74474 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2e161f5cde2e462cb9fb80847b9945297701bdc8e7251bde04f5738d45684f8b \
	I0211 03:23:26.928063   74474 kubeadm.go:310] 	--control-plane 
	I0211 03:23:26.928072   74474 kubeadm.go:310] 
	I0211 03:23:26.928177   74474 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0211 03:23:26.928198   74474 kubeadm.go:310] 
	I0211 03:23:26.928309   74474 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ebiq8w.v8x3puwqitndsbjn \
	I0211 03:23:26.928442   74474 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2e161f5cde2e462cb9fb80847b9945297701bdc8e7251bde04f5738d45684f8b 
	I0211 03:23:26.928455   74474 cni.go:84] Creating CNI manager for "flannel"
	I0211 03:23:26.929837   74474 out.go:177] * Configuring Flannel (Container Networking Interface) ...
	I0211 03:23:26.931264   74474 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0211 03:23:26.937067   74474 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.1/kubectl ...
	I0211 03:23:26.937088   74474 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4345 bytes)
	I0211 03:23:26.954563   74474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0211 03:23:27.384338   74474 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0211 03:23:27.384400   74474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 03:23:27.384471   74474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-649359 minikube.k8s.io/updated_at=2025_02_11T03_23_27_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=8e91f70b9b442caa4bec80b031add390ac34d321 minikube.k8s.io/name=flannel-649359 minikube.k8s.io/primary=true
	I0211 03:23:27.414230   74474 ops.go:34] apiserver oom_adj: -16
	I0211 03:23:27.541442   74474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 03:23:24.339475   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:24.340049   76224 main.go:141] libmachine: (bridge-649359) DBG | unable to find current IP address of domain bridge-649359 in network mk-bridge-649359
	I0211 03:23:24.340078   76224 main.go:141] libmachine: (bridge-649359) DBG | I0211 03:23:24.340007   76257 retry.go:31] will retry after 2.345457885s: waiting for domain to come up
	I0211 03:23:26.687811   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:26.688293   76224 main.go:141] libmachine: (bridge-649359) DBG | unable to find current IP address of domain bridge-649359 in network mk-bridge-649359
	I0211 03:23:26.688321   76224 main.go:141] libmachine: (bridge-649359) DBG | I0211 03:23:26.688254   76257 retry.go:31] will retry after 3.825044435s: waiting for domain to come up
	I0211 03:23:28.042372   74474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 03:23:28.541540   74474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 03:23:29.042336   74474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 03:23:29.541956   74474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 03:23:30.041506   74474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 03:23:30.541941   74474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 03:23:31.042153   74474 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 03:23:31.157212   74474 kubeadm.go:1113] duration metric: took 3.772846263s to wait for elevateKubeSystemPrivileges
	I0211 03:23:31.157259   74474 kubeadm.go:394] duration metric: took 14.214120371s to StartCluster
	I0211 03:23:31.157284   74474 settings.go:142] acquiring lock: {Name:mkf2645a714cc5873c434b18e1494d4128c48052 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 03:23:31.157377   74474 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20400-12456/kubeconfig
	I0211 03:23:31.158947   74474 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12456/kubeconfig: {Name:mkd961d61f748b29ba3bb0ad55f8216d88f98444 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 03:23:31.159197   74474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0211 03:23:31.159194   74474 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.72.59 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0211 03:23:31.159278   74474 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0211 03:23:31.159400   74474 config.go:182] Loaded profile config "flannel-649359": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0211 03:23:31.159417   74474 addons.go:69] Setting default-storageclass=true in profile "flannel-649359"
	I0211 03:23:31.159405   74474 addons.go:69] Setting storage-provisioner=true in profile "flannel-649359"
	I0211 03:23:31.159470   74474 addons.go:238] Setting addon storage-provisioner=true in "flannel-649359"
	I0211 03:23:31.159436   74474 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-649359"
	I0211 03:23:31.159522   74474 host.go:66] Checking if "flannel-649359" exists ...
	I0211 03:23:31.159990   74474 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 03:23:31.160004   74474 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 03:23:31.160033   74474 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 03:23:31.160131   74474 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 03:23:31.160894   74474 out.go:177] * Verifying Kubernetes components...
	I0211 03:23:31.162270   74474 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0211 03:23:31.175352   74474 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36901
	I0211 03:23:31.175810   74474 main.go:141] libmachine: () Calling .GetVersion
	I0211 03:23:31.176282   74474 main.go:141] libmachine: Using API Version  1
	I0211 03:23:31.176308   74474 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 03:23:31.176589   74474 main.go:141] libmachine: () Calling .GetMachineName
	I0211 03:23:31.177193   74474 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 03:23:31.177240   74474 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 03:23:31.179744   74474 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39345
	I0211 03:23:31.180108   74474 main.go:141] libmachine: () Calling .GetVersion
	I0211 03:23:31.180621   74474 main.go:141] libmachine: Using API Version  1
	I0211 03:23:31.180637   74474 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 03:23:31.180943   74474 main.go:141] libmachine: () Calling .GetMachineName
	I0211 03:23:31.181139   74474 main.go:141] libmachine: (flannel-649359) Calling .GetState
	I0211 03:23:31.184387   74474 addons.go:238] Setting addon default-storageclass=true in "flannel-649359"
	I0211 03:23:31.184427   74474 host.go:66] Checking if "flannel-649359" exists ...
	I0211 03:23:31.184774   74474 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 03:23:31.184816   74474 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 03:23:31.192118   74474 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36603
	I0211 03:23:31.192590   74474 main.go:141] libmachine: () Calling .GetVersion
	I0211 03:23:31.193035   74474 main.go:141] libmachine: Using API Version  1
	I0211 03:23:31.193055   74474 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 03:23:31.193470   74474 main.go:141] libmachine: () Calling .GetMachineName
	I0211 03:23:31.193620   74474 main.go:141] libmachine: (flannel-649359) Calling .GetState
	I0211 03:23:31.195383   74474 main.go:141] libmachine: (flannel-649359) Calling .DriverName
	I0211 03:23:31.196999   74474 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0211 03:23:31.198355   74474 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0211 03:23:31.198376   74474 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0211 03:23:31.198393   74474 main.go:141] libmachine: (flannel-649359) Calling .GetSSHHostname
	I0211 03:23:31.202178   74474 main.go:141] libmachine: (flannel-649359) DBG | domain flannel-649359 has defined MAC address 52:54:00:7f:c4:fc in network mk-flannel-649359
	I0211 03:23:31.202505   74474 main.go:141] libmachine: (flannel-649359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:c4:fc", ip: ""} in network mk-flannel-649359: {Iface:virbr4 ExpiryTime:2025-02-11 04:22:58 +0000 UTC Type:0 Mac:52:54:00:7f:c4:fc Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:flannel-649359 Clientid:01:52:54:00:7f:c4:fc}
	I0211 03:23:31.202527   74474 main.go:141] libmachine: (flannel-649359) DBG | domain flannel-649359 has defined IP address 192.168.72.59 and MAC address 52:54:00:7f:c4:fc in network mk-flannel-649359
	I0211 03:23:31.202806   74474 main.go:141] libmachine: (flannel-649359) Calling .GetSSHPort
	I0211 03:23:31.202984   74474 main.go:141] libmachine: (flannel-649359) Calling .GetSSHKeyPath
	I0211 03:23:31.203127   74474 main.go:141] libmachine: (flannel-649359) Calling .GetSSHUsername
	I0211 03:23:31.203233   74474 sshutil.go:53] new ssh client: &{IP:192.168.72.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/flannel-649359/id_rsa Username:docker}
	I0211 03:23:31.207667   74474 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46317
	I0211 03:23:31.208063   74474 main.go:141] libmachine: () Calling .GetVersion
	I0211 03:23:31.208524   74474 main.go:141] libmachine: Using API Version  1
	I0211 03:23:31.208541   74474 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 03:23:31.208888   74474 main.go:141] libmachine: () Calling .GetMachineName
	I0211 03:23:31.209357   74474 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 03:23:31.209394   74474 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 03:23:31.225708   74474 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41747
	I0211 03:23:31.226181   74474 main.go:141] libmachine: () Calling .GetVersion
	I0211 03:23:31.226804   74474 main.go:141] libmachine: Using API Version  1
	I0211 03:23:31.226832   74474 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 03:23:31.227174   74474 main.go:141] libmachine: () Calling .GetMachineName
	I0211 03:23:31.227498   74474 main.go:141] libmachine: (flannel-649359) Calling .GetState
	I0211 03:23:31.229070   74474 main.go:141] libmachine: (flannel-649359) Calling .DriverName
	I0211 03:23:31.229290   74474 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0211 03:23:31.229308   74474 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0211 03:23:31.229327   74474 main.go:141] libmachine: (flannel-649359) Calling .GetSSHHostname
	I0211 03:23:31.232390   74474 main.go:141] libmachine: (flannel-649359) DBG | domain flannel-649359 has defined MAC address 52:54:00:7f:c4:fc in network mk-flannel-649359
	I0211 03:23:31.232914   74474 main.go:141] libmachine: (flannel-649359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:c4:fc", ip: ""} in network mk-flannel-649359: {Iface:virbr4 ExpiryTime:2025-02-11 04:22:58 +0000 UTC Type:0 Mac:52:54:00:7f:c4:fc Iaid: IPaddr:192.168.72.59 Prefix:24 Hostname:flannel-649359 Clientid:01:52:54:00:7f:c4:fc}
	I0211 03:23:31.232938   74474 main.go:141] libmachine: (flannel-649359) DBG | domain flannel-649359 has defined IP address 192.168.72.59 and MAC address 52:54:00:7f:c4:fc in network mk-flannel-649359
	I0211 03:23:31.233011   74474 main.go:141] libmachine: (flannel-649359) Calling .GetSSHPort
	I0211 03:23:31.233171   74474 main.go:141] libmachine: (flannel-649359) Calling .GetSSHKeyPath
	I0211 03:23:31.233295   74474 main.go:141] libmachine: (flannel-649359) Calling .GetSSHUsername
	I0211 03:23:31.233530   74474 sshutil.go:53] new ssh client: &{IP:192.168.72.59 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/flannel-649359/id_rsa Username:docker}
	I0211 03:23:31.512598   74474 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0211 03:23:31.512814   74474 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0211 03:23:31.535457   74474 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0211 03:23:31.718768   74474 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0211 03:23:31.971800   74474 start.go:971] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0211 03:23:31.972666   74474 node_ready.go:35] waiting up to 15m0s for node "flannel-649359" to be "Ready" ...
	I0211 03:23:32.239075   74474 main.go:141] libmachine: Making call to close driver server
	I0211 03:23:32.239107   74474 main.go:141] libmachine: (flannel-649359) Calling .Close
	I0211 03:23:32.239165   74474 main.go:141] libmachine: Making call to close driver server
	I0211 03:23:32.239214   74474 main.go:141] libmachine: (flannel-649359) Calling .Close
	I0211 03:23:32.239387   74474 main.go:141] libmachine: Successfully made call to close driver server
	I0211 03:23:32.239408   74474 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 03:23:32.239417   74474 main.go:141] libmachine: Making call to close driver server
	I0211 03:23:32.239425   74474 main.go:141] libmachine: (flannel-649359) Calling .Close
	I0211 03:23:32.239485   74474 main.go:141] libmachine: Successfully made call to close driver server
	I0211 03:23:32.239507   74474 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 03:23:32.239523   74474 main.go:141] libmachine: Making call to close driver server
	I0211 03:23:32.239527   74474 main.go:141] libmachine: (flannel-649359) DBG | Closing plugin on server side
	I0211 03:23:32.239531   74474 main.go:141] libmachine: (flannel-649359) Calling .Close
	I0211 03:23:32.239604   74474 main.go:141] libmachine: (flannel-649359) DBG | Closing plugin on server side
	I0211 03:23:32.239630   74474 main.go:141] libmachine: Successfully made call to close driver server
	I0211 03:23:32.239641   74474 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 03:23:32.239919   74474 main.go:141] libmachine: (flannel-649359) DBG | Closing plugin on server side
	I0211 03:23:32.239936   74474 main.go:141] libmachine: Successfully made call to close driver server
	I0211 03:23:32.239948   74474 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 03:23:32.239966   74474 main.go:141] libmachine: (flannel-649359) DBG | Closing plugin on server side
	I0211 03:23:32.251488   74474 main.go:141] libmachine: Making call to close driver server
	I0211 03:23:32.251503   74474 main.go:141] libmachine: (flannel-649359) Calling .Close
	I0211 03:23:32.251785   74474 main.go:141] libmachine: Successfully made call to close driver server
	I0211 03:23:32.251803   74474 main.go:141] libmachine: (flannel-649359) DBG | Closing plugin on server side
	I0211 03:23:32.251805   74474 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 03:23:32.253228   74474 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0211 03:23:32.254447   74474 addons.go:514] duration metric: took 1.095190594s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0211 03:23:32.475799   74474 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-649359" context rescaled to 1 replicas
	I0211 03:23:30.516131   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:30.516631   76224 main.go:141] libmachine: (bridge-649359) DBG | unable to find current IP address of domain bridge-649359 in network mk-bridge-649359
	I0211 03:23:30.516703   76224 main.go:141] libmachine: (bridge-649359) DBG | I0211 03:23:30.516626   76257 retry.go:31] will retry after 4.666819437s: waiting for domain to come up
	I0211 03:23:33.976110   74474 node_ready.go:53] node "flannel-649359" has status "Ready":"False"
	I0211 03:23:36.477242   74474 node_ready.go:53] node "flannel-649359" has status "Ready":"False"
	I0211 03:23:35.186578   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:35.187144   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has current primary IP address 192.168.61.91 and MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:35.187203   76224 main.go:141] libmachine: (bridge-649359) found domain IP: 192.168.61.91
	I0211 03:23:35.187227   76224 main.go:141] libmachine: (bridge-649359) reserving static IP address...
	I0211 03:23:35.187589   76224 main.go:141] libmachine: (bridge-649359) DBG | unable to find host DHCP lease matching {name: "bridge-649359", mac: "52:54:00:2f:d7:2b", ip: "192.168.61.91"} in network mk-bridge-649359
	I0211 03:23:35.267976   76224 main.go:141] libmachine: (bridge-649359) DBG | Getting to WaitForSSH function...
	I0211 03:23:35.268008   76224 main.go:141] libmachine: (bridge-649359) reserved static IP address 192.168.61.91 for domain bridge-649359
	I0211 03:23:35.268020   76224 main.go:141] libmachine: (bridge-649359) waiting for SSH...
	I0211 03:23:35.270460   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:35.270885   76224 main.go:141] libmachine: (bridge-649359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:d7:2b", ip: ""} in network mk-bridge-649359: {Iface:virbr3 ExpiryTime:2025-02-11 04:23:29 +0000 UTC Type:0 Mac:52:54:00:2f:d7:2b Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2f:d7:2b}
	I0211 03:23:35.270915   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined IP address 192.168.61.91 and MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:35.270997   76224 main.go:141] libmachine: (bridge-649359) DBG | Using SSH client type: external
	I0211 03:23:35.271023   76224 main.go:141] libmachine: (bridge-649359) DBG | Using SSH private key: /home/jenkins/minikube-integration/20400-12456/.minikube/machines/bridge-649359/id_rsa (-rw-------)
	I0211 03:23:35.271055   76224 main.go:141] libmachine: (bridge-649359) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.91 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20400-12456/.minikube/machines/bridge-649359/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0211 03:23:35.271074   76224 main.go:141] libmachine: (bridge-649359) DBG | About to run SSH command:
	I0211 03:23:35.271085   76224 main.go:141] libmachine: (bridge-649359) DBG | exit 0
	I0211 03:23:35.402666   76224 main.go:141] libmachine: (bridge-649359) DBG | SSH cmd err, output: <nil>: 
	I0211 03:23:35.402946   76224 main.go:141] libmachine: (bridge-649359) KVM machine creation complete
	I0211 03:23:35.403256   76224 main.go:141] libmachine: (bridge-649359) Calling .GetConfigRaw
	I0211 03:23:35.403871   76224 main.go:141] libmachine: (bridge-649359) Calling .DriverName
	I0211 03:23:35.404070   76224 main.go:141] libmachine: (bridge-649359) Calling .DriverName
	I0211 03:23:35.404224   76224 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0211 03:23:35.404243   76224 main.go:141] libmachine: (bridge-649359) Calling .GetState
	I0211 03:23:35.405533   76224 main.go:141] libmachine: Detecting operating system of created instance...
	I0211 03:23:35.405551   76224 main.go:141] libmachine: Waiting for SSH to be available...
	I0211 03:23:35.405559   76224 main.go:141] libmachine: Getting to WaitForSSH function...
	I0211 03:23:35.405621   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHHostname
	I0211 03:23:35.408617   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:35.409051   76224 main.go:141] libmachine: (bridge-649359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:d7:2b", ip: ""} in network mk-bridge-649359: {Iface:virbr3 ExpiryTime:2025-02-11 04:23:29 +0000 UTC Type:0 Mac:52:54:00:2f:d7:2b Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:bridge-649359 Clientid:01:52:54:00:2f:d7:2b}
	I0211 03:23:35.409125   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined IP address 192.168.61.91 and MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:35.409326   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHPort
	I0211 03:23:35.409495   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHKeyPath
	I0211 03:23:35.409695   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHKeyPath
	I0211 03:23:35.409843   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHUsername
	I0211 03:23:35.410020   76224 main.go:141] libmachine: Using SSH client type: native
	I0211 03:23:35.410255   76224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0211 03:23:35.410284   76224 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0211 03:23:35.526114   76224 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0211 03:23:35.526137   76224 main.go:141] libmachine: Detecting the provisioner...
	I0211 03:23:35.526147   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHHostname
	I0211 03:23:35.529034   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:35.529374   76224 main.go:141] libmachine: (bridge-649359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:d7:2b", ip: ""} in network mk-bridge-649359: {Iface:virbr3 ExpiryTime:2025-02-11 04:23:29 +0000 UTC Type:0 Mac:52:54:00:2f:d7:2b Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:bridge-649359 Clientid:01:52:54:00:2f:d7:2b}
	I0211 03:23:35.529408   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined IP address 192.168.61.91 and MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:35.529677   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHPort
	I0211 03:23:35.529858   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHKeyPath
	I0211 03:23:35.529998   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHKeyPath
	I0211 03:23:35.530142   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHUsername
	I0211 03:23:35.530313   76224 main.go:141] libmachine: Using SSH client type: native
	I0211 03:23:35.530522   76224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0211 03:23:35.530536   76224 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0211 03:23:35.635554   76224 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0211 03:23:35.635643   76224 main.go:141] libmachine: found compatible host: buildroot
	I0211 03:23:35.635658   76224 main.go:141] libmachine: Provisioning with buildroot...
	I0211 03:23:35.635673   76224 main.go:141] libmachine: (bridge-649359) Calling .GetMachineName
	I0211 03:23:35.635899   76224 buildroot.go:166] provisioning hostname "bridge-649359"
	I0211 03:23:35.635933   76224 main.go:141] libmachine: (bridge-649359) Calling .GetMachineName
	I0211 03:23:35.636138   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHHostname
	I0211 03:23:35.638946   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:35.639414   76224 main.go:141] libmachine: (bridge-649359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:d7:2b", ip: ""} in network mk-bridge-649359: {Iface:virbr3 ExpiryTime:2025-02-11 04:23:29 +0000 UTC Type:0 Mac:52:54:00:2f:d7:2b Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:bridge-649359 Clientid:01:52:54:00:2f:d7:2b}
	I0211 03:23:35.639443   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined IP address 192.168.61.91 and MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:35.639607   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHPort
	I0211 03:23:35.639805   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHKeyPath
	I0211 03:23:35.639949   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHKeyPath
	I0211 03:23:35.640086   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHUsername
	I0211 03:23:35.640237   76224 main.go:141] libmachine: Using SSH client type: native
	I0211 03:23:35.640437   76224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0211 03:23:35.640452   76224 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-649359 && echo "bridge-649359" | sudo tee /etc/hostname
	I0211 03:23:35.762221   76224 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-649359
	
	I0211 03:23:35.762265   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHHostname
	I0211 03:23:35.765124   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:35.765565   76224 main.go:141] libmachine: (bridge-649359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:d7:2b", ip: ""} in network mk-bridge-649359: {Iface:virbr3 ExpiryTime:2025-02-11 04:23:29 +0000 UTC Type:0 Mac:52:54:00:2f:d7:2b Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:bridge-649359 Clientid:01:52:54:00:2f:d7:2b}
	I0211 03:23:35.765592   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined IP address 192.168.61.91 and MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:35.765798   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHPort
	I0211 03:23:35.765983   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHKeyPath
	I0211 03:23:35.766134   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHKeyPath
	I0211 03:23:35.766295   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHUsername
	I0211 03:23:35.766454   76224 main.go:141] libmachine: Using SSH client type: native
	I0211 03:23:35.766667   76224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0211 03:23:35.766700   76224 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-649359' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-649359/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-649359' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0211 03:23:35.892813   76224 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0211 03:23:35.892848   76224 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20400-12456/.minikube CaCertPath:/home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20400-12456/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20400-12456/.minikube}
	I0211 03:23:35.892882   76224 buildroot.go:174] setting up certificates
	I0211 03:23:35.892898   76224 provision.go:84] configureAuth start
	I0211 03:23:35.892911   76224 main.go:141] libmachine: (bridge-649359) Calling .GetMachineName
	I0211 03:23:35.893179   76224 main.go:141] libmachine: (bridge-649359) Calling .GetIP
	I0211 03:23:35.896644   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:35.896941   76224 main.go:141] libmachine: (bridge-649359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:d7:2b", ip: ""} in network mk-bridge-649359: {Iface:virbr3 ExpiryTime:2025-02-11 04:23:29 +0000 UTC Type:0 Mac:52:54:00:2f:d7:2b Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:bridge-649359 Clientid:01:52:54:00:2f:d7:2b}
	I0211 03:23:35.896984   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined IP address 192.168.61.91 and MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:35.897119   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHHostname
	I0211 03:23:35.899782   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:35.900151   76224 main.go:141] libmachine: (bridge-649359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:d7:2b", ip: ""} in network mk-bridge-649359: {Iface:virbr3 ExpiryTime:2025-02-11 04:23:29 +0000 UTC Type:0 Mac:52:54:00:2f:d7:2b Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:bridge-649359 Clientid:01:52:54:00:2f:d7:2b}
	I0211 03:23:35.900194   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined IP address 192.168.61.91 and MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:35.900321   76224 provision.go:143] copyHostCerts
	I0211 03:23:35.900388   76224 exec_runner.go:144] found /home/jenkins/minikube-integration/20400-12456/.minikube/ca.pem, removing ...
	I0211 03:23:35.900412   76224 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20400-12456/.minikube/ca.pem
	I0211 03:23:35.900497   76224 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20400-12456/.minikube/ca.pem (1078 bytes)
	I0211 03:23:35.900624   76224 exec_runner.go:144] found /home/jenkins/minikube-integration/20400-12456/.minikube/cert.pem, removing ...
	I0211 03:23:35.900631   76224 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20400-12456/.minikube/cert.pem
	I0211 03:23:35.900661   76224 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20400-12456/.minikube/cert.pem (1123 bytes)
	I0211 03:23:35.900745   76224 exec_runner.go:144] found /home/jenkins/minikube-integration/20400-12456/.minikube/key.pem, removing ...
	I0211 03:23:35.900752   76224 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20400-12456/.minikube/key.pem
	I0211 03:23:35.900780   76224 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20400-12456/.minikube/key.pem (1679 bytes)
	I0211 03:23:35.900854   76224 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20400-12456/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca-key.pem org=jenkins.bridge-649359 san=[127.0.0.1 192.168.61.91 bridge-649359 localhost minikube]
	I0211 03:23:36.073804   76224 provision.go:177] copyRemoteCerts
	I0211 03:23:36.073857   76224 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0211 03:23:36.073890   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHHostname
	I0211 03:23:36.077003   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:36.077481   76224 main.go:141] libmachine: (bridge-649359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:d7:2b", ip: ""} in network mk-bridge-649359: {Iface:virbr3 ExpiryTime:2025-02-11 04:23:29 +0000 UTC Type:0 Mac:52:54:00:2f:d7:2b Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:bridge-649359 Clientid:01:52:54:00:2f:d7:2b}
	I0211 03:23:36.077514   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined IP address 192.168.61.91 and MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:36.077769   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHPort
	I0211 03:23:36.077984   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHKeyPath
	I0211 03:23:36.078141   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHUsername
	I0211 03:23:36.078290   76224 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/bridge-649359/id_rsa Username:docker}
	I0211 03:23:36.161453   76224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0211 03:23:36.189242   76224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0211 03:23:36.216597   76224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0211 03:23:36.243510   76224 provision.go:87] duration metric: took 350.596014ms to configureAuth
	I0211 03:23:36.243541   76224 buildroot.go:189] setting minikube options for container-runtime
	I0211 03:23:36.243781   76224 config.go:182] Loaded profile config "bridge-649359": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0211 03:23:36.243871   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHHostname
	I0211 03:23:36.247213   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:36.247674   76224 main.go:141] libmachine: (bridge-649359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:d7:2b", ip: ""} in network mk-bridge-649359: {Iface:virbr3 ExpiryTime:2025-02-11 04:23:29 +0000 UTC Type:0 Mac:52:54:00:2f:d7:2b Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:bridge-649359 Clientid:01:52:54:00:2f:d7:2b}
	I0211 03:23:36.247702   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined IP address 192.168.61.91 and MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:36.247936   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHPort
	I0211 03:23:36.248124   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHKeyPath
	I0211 03:23:36.248314   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHKeyPath
	I0211 03:23:36.248459   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHUsername
	I0211 03:23:36.248635   76224 main.go:141] libmachine: Using SSH client type: native
	I0211 03:23:36.248853   76224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0211 03:23:36.248875   76224 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0211 03:23:36.517171   76224 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0211 03:23:36.517204   76224 main.go:141] libmachine: Checking connection to Docker...
	I0211 03:23:36.517215   76224 main.go:141] libmachine: (bridge-649359) Calling .GetURL
	I0211 03:23:36.520153   76224 main.go:141] libmachine: (bridge-649359) DBG | using libvirt version 6000000
	I0211 03:23:36.523261   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:36.523631   76224 main.go:141] libmachine: (bridge-649359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:d7:2b", ip: ""} in network mk-bridge-649359: {Iface:virbr3 ExpiryTime:2025-02-11 04:23:29 +0000 UTC Type:0 Mac:52:54:00:2f:d7:2b Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:bridge-649359 Clientid:01:52:54:00:2f:d7:2b}
	I0211 03:23:36.523666   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined IP address 192.168.61.91 and MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:36.523827   76224 main.go:141] libmachine: Docker is up and running!
	I0211 03:23:36.523859   76224 main.go:141] libmachine: Reticulating splines...
	I0211 03:23:36.523871   76224 client.go:171] duration metric: took 23.332575353s to LocalClient.Create
	I0211 03:23:36.523900   76224 start.go:167] duration metric: took 23.332644457s to libmachine.API.Create "bridge-649359"
	I0211 03:23:36.523913   76224 start.go:293] postStartSetup for "bridge-649359" (driver="kvm2")
	I0211 03:23:36.523929   76224 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0211 03:23:36.523954   76224 main.go:141] libmachine: (bridge-649359) Calling .DriverName
	I0211 03:23:36.524189   76224 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0211 03:23:36.524219   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHHostname
	I0211 03:23:36.526942   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:36.527288   76224 main.go:141] libmachine: (bridge-649359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:d7:2b", ip: ""} in network mk-bridge-649359: {Iface:virbr3 ExpiryTime:2025-02-11 04:23:29 +0000 UTC Type:0 Mac:52:54:00:2f:d7:2b Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:bridge-649359 Clientid:01:52:54:00:2f:d7:2b}
	I0211 03:23:36.527312   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined IP address 192.168.61.91 and MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:36.527456   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHPort
	I0211 03:23:36.527617   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHKeyPath
	I0211 03:23:36.527779   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHUsername
	I0211 03:23:36.527903   76224 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/bridge-649359/id_rsa Username:docker}
	I0211 03:23:36.617753   76224 ssh_runner.go:195] Run: cat /etc/os-release
	I0211 03:23:36.622972   76224 info.go:137] Remote host: Buildroot 2023.02.9
	I0211 03:23:36.623024   76224 filesync.go:126] Scanning /home/jenkins/minikube-integration/20400-12456/.minikube/addons for local assets ...
	I0211 03:23:36.623081   76224 filesync.go:126] Scanning /home/jenkins/minikube-integration/20400-12456/.minikube/files for local assets ...
	I0211 03:23:36.623178   76224 filesync.go:149] local asset: /home/jenkins/minikube-integration/20400-12456/.minikube/files/etc/ssl/certs/196452.pem -> 196452.pem in /etc/ssl/certs
	I0211 03:23:36.623319   76224 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0211 03:23:36.636859   76224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/files/etc/ssl/certs/196452.pem --> /etc/ssl/certs/196452.pem (1708 bytes)
	I0211 03:23:36.666083   76224 start.go:296] duration metric: took 142.139243ms for postStartSetup
	I0211 03:23:36.666143   76224 main.go:141] libmachine: (bridge-649359) Calling .GetConfigRaw
	I0211 03:23:36.666765   76224 main.go:141] libmachine: (bridge-649359) Calling .GetIP
	I0211 03:23:36.670223   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:36.670654   76224 main.go:141] libmachine: (bridge-649359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:d7:2b", ip: ""} in network mk-bridge-649359: {Iface:virbr3 ExpiryTime:2025-02-11 04:23:29 +0000 UTC Type:0 Mac:52:54:00:2f:d7:2b Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:bridge-649359 Clientid:01:52:54:00:2f:d7:2b}
	I0211 03:23:36.670685   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined IP address 192.168.61.91 and MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:36.671009   76224 profile.go:143] Saving config to /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/config.json ...
	I0211 03:23:36.671232   76224 start.go:128] duration metric: took 23.511030087s to createHost
	I0211 03:23:36.671263   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHHostname
	I0211 03:23:36.673942   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:36.674427   76224 main.go:141] libmachine: (bridge-649359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:d7:2b", ip: ""} in network mk-bridge-649359: {Iface:virbr3 ExpiryTime:2025-02-11 04:23:29 +0000 UTC Type:0 Mac:52:54:00:2f:d7:2b Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:bridge-649359 Clientid:01:52:54:00:2f:d7:2b}
	I0211 03:23:36.674449   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined IP address 192.168.61.91 and MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:36.674646   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHPort
	I0211 03:23:36.674823   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHKeyPath
	I0211 03:23:36.675003   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHKeyPath
	I0211 03:23:36.675148   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHUsername
	I0211 03:23:36.675332   76224 main.go:141] libmachine: Using SSH client type: native
	I0211 03:23:36.675536   76224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0211 03:23:36.675547   76224 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0211 03:23:36.784610   76224 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739244216.756269292
	
	I0211 03:23:36.784633   76224 fix.go:216] guest clock: 1739244216.756269292
	I0211 03:23:36.784643   76224 fix.go:229] Guest: 2025-02-11 03:23:36.756269292 +0000 UTC Remote: 2025-02-11 03:23:36.671247216 +0000 UTC m=+23.630270874 (delta=85.022076ms)
	I0211 03:23:36.784669   76224 fix.go:200] guest clock delta is within tolerance: 85.022076ms
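The fix.go lines above read the guest VM clock over SSH with `date +%s.%N`, compare it against the host-side timestamp, and accept the drift when it falls inside a tolerance. Below is a minimal, self-contained Go sketch of that delta-within-tolerance check; the 1-second tolerance, the function names, and the hard-coded timestamps (taken from this log) are illustrative assumptions, not minikube's actual implementation.

	package main
	
	import (
		"fmt"
		"math"
		"strconv"
		"strings"
		"time"
	)
	
	// parseGuestClock converts the output of `date +%s.%N`
	// (e.g. "1739244216.756269292") into a time.Time.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			nsec, err = strconv.ParseInt(parts[1], 10, 64)
			if err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}
	
	func main() {
		// Timestamps copied from the log above; the 1s tolerance is an assumption.
		guest, err := parseGuestClock("1739244216.756269292")
		if err != nil {
			panic(err)
		}
		remote := time.Date(2025, 2, 11, 3, 23, 36, 671247216, time.UTC)
		tolerance := time.Second
	
		delta := guest.Sub(remote)
		if math.Abs(float64(delta)) <= float64(tolerance) {
			fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
		} else {
			fmt.Printf("guest clock delta %v exceeds tolerance %v; would resync\n", delta, tolerance)
		}
	}

Running this prints a delta of roughly 85ms, matching the "delta=85.022076ms ... within tolerance" lines above.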
	I0211 03:23:36.784676   76224 start.go:83] releasing machines lock for "bridge-649359", held for 23.624569744s
	I0211 03:23:36.784698   76224 main.go:141] libmachine: (bridge-649359) Calling .DriverName
	I0211 03:23:36.784951   76224 main.go:141] libmachine: (bridge-649359) Calling .GetIP
	I0211 03:23:36.788080   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:36.788483   76224 main.go:141] libmachine: (bridge-649359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:d7:2b", ip: ""} in network mk-bridge-649359: {Iface:virbr3 ExpiryTime:2025-02-11 04:23:29 +0000 UTC Type:0 Mac:52:54:00:2f:d7:2b Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:bridge-649359 Clientid:01:52:54:00:2f:d7:2b}
	I0211 03:23:36.788511   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined IP address 192.168.61.91 and MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:36.788784   76224 main.go:141] libmachine: (bridge-649359) Calling .DriverName
	I0211 03:23:36.789525   76224 main.go:141] libmachine: (bridge-649359) Calling .DriverName
	I0211 03:23:36.789693   76224 main.go:141] libmachine: (bridge-649359) Calling .DriverName
	I0211 03:23:36.789782   76224 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0211 03:23:36.789829   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHHostname
	I0211 03:23:36.789909   76224 ssh_runner.go:195] Run: cat /version.json
	I0211 03:23:36.789924   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHHostname
	I0211 03:23:36.792931   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:36.793131   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:36.793261   76224 main.go:141] libmachine: (bridge-649359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:d7:2b", ip: ""} in network mk-bridge-649359: {Iface:virbr3 ExpiryTime:2025-02-11 04:23:29 +0000 UTC Type:0 Mac:52:54:00:2f:d7:2b Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:bridge-649359 Clientid:01:52:54:00:2f:d7:2b}
	I0211 03:23:36.793288   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined IP address 192.168.61.91 and MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:36.793553   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHPort
	I0211 03:23:36.793622   76224 main.go:141] libmachine: (bridge-649359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:d7:2b", ip: ""} in network mk-bridge-649359: {Iface:virbr3 ExpiryTime:2025-02-11 04:23:29 +0000 UTC Type:0 Mac:52:54:00:2f:d7:2b Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:bridge-649359 Clientid:01:52:54:00:2f:d7:2b}
	I0211 03:23:36.793640   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined IP address 192.168.61.91 and MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:36.793712   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHKeyPath
	I0211 03:23:36.793762   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHPort
	I0211 03:23:36.793864   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHKeyPath
	I0211 03:23:36.793870   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHUsername
	I0211 03:23:36.793981   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHUsername
	I0211 03:23:36.794028   76224 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/bridge-649359/id_rsa Username:docker}
	I0211 03:23:36.794310   76224 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/bridge-649359/id_rsa Username:docker}
	I0211 03:23:36.885484   76224 ssh_runner.go:195] Run: systemctl --version
	I0211 03:23:36.911040   76224 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0211 03:23:37.073802   76224 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0211 03:23:37.079969   76224 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0211 03:23:37.080026   76224 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0211 03:23:37.096890   76224 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0211 03:23:37.096914   76224 start.go:495] detecting cgroup driver to use...
	I0211 03:23:37.096978   76224 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0211 03:23:37.120026   76224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0211 03:23:37.139174   76224 docker.go:217] disabling cri-docker service (if available) ...
	I0211 03:23:37.139247   76224 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0211 03:23:37.153365   76224 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0211 03:23:37.169285   76224 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0211 03:23:37.306563   76224 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0211 03:23:37.460219   76224 docker.go:233] disabling docker service ...
	I0211 03:23:37.460288   76224 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0211 03:23:37.479218   76224 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0211 03:23:37.493170   76224 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0211 03:23:37.692462   76224 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0211 03:23:37.842158   76224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0211 03:23:37.856153   76224 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0211 03:23:37.873979   76224 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0211 03:23:37.874046   76224 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 03:23:37.887261   76224 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0211 03:23:37.887332   76224 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 03:23:37.899856   76224 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 03:23:37.910779   76224 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 03:23:37.921712   76224 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0211 03:23:37.933184   76224 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 03:23:37.943491   76224 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 03:23:37.962546   76224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0211 03:23:37.973182   76224 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0211 03:23:37.985022   76224 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0211 03:23:37.985087   76224 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0211 03:23:38.001415   76224 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0211 03:23:38.013337   76224 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0211 03:23:38.159717   76224 ssh_runner.go:195] Run: sudo systemctl restart crio
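The steps just above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, default sysctls), load br_netfilter, enable IP forwarding, and then restart crio, with each step issued as a one-off shell command on the guest. Below is a minimal Go sketch of driving such a step list through a command runner; the Runner interface, localRunner, and the abbreviated step list are illustrative assumptions, not minikube's ssh_runner API (and the sed/systemctl commands will simply fail on a machine without crio).

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// Runner abstracts "run this shell command on the guest". Here it shells out
	// locally so the sketch stays self-contained; in the log above the same kind
	// of commands are executed over SSH instead.
	type Runner interface {
		Run(cmd string) error
	}
	
	type localRunner struct{}
	
	func (localRunner) Run(cmd string) error {
		out, err := exec.Command("sh", "-c", cmd).CombinedOutput()
		if err != nil {
			return fmt.Errorf("%q failed: %v: %s", cmd, err, out)
		}
		return nil
	}
	
	func main() {
		r := localRunner{}
		// An abbreviated step list mirroring the log above; run in order,
		// stopping at the first failure.
		steps := []string{
			`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf`,
			`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
			`sudo systemctl daemon-reload`,
			`sudo systemctl restart crio`,
		}
		for _, s := range steps {
			if err := r.Run(s); err != nil {
				fmt.Println("aborting:", err)
				return
			}
			fmt.Println("ok:", s)
		}
	}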
	I0211 03:23:38.264071   76224 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0211 03:23:38.264138   76224 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0211 03:23:38.269045   76224 start.go:563] Will wait 60s for crictl version
	I0211 03:23:38.269103   76224 ssh_runner.go:195] Run: which crictl
	I0211 03:23:38.273062   76224 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0211 03:23:38.325620   76224 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0211 03:23:38.325708   76224 ssh_runner.go:195] Run: crio --version
	I0211 03:23:38.364258   76224 ssh_runner.go:195] Run: crio --version
	I0211 03:23:38.407529   76224 out.go:177] * Preparing Kubernetes v1.32.1 on CRI-O 1.29.1 ...
	I0211 03:23:38.977889   74474 node_ready.go:49] node "flannel-649359" has status "Ready":"True"
	I0211 03:23:38.977915   74474 node_ready.go:38] duration metric: took 7.005228002s for node "flannel-649359" to be "Ready" ...
	I0211 03:23:38.977927   74474 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0211 03:23:38.983312   74474 pod_ready.go:79] waiting up to 15m0s for pod "coredns-668d6bf9bc-ktrqg" in "kube-system" namespace to be "Ready" ...
	I0211 03:23:40.991441   74474 pod_ready.go:103] pod "coredns-668d6bf9bc-ktrqg" in "kube-system" namespace has status "Ready":"False"
	I0211 03:23:38.408768   76224 main.go:141] libmachine: (bridge-649359) Calling .GetIP
	I0211 03:23:38.411980   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:38.412508   76224 main.go:141] libmachine: (bridge-649359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:d7:2b", ip: ""} in network mk-bridge-649359: {Iface:virbr3 ExpiryTime:2025-02-11 04:23:29 +0000 UTC Type:0 Mac:52:54:00:2f:d7:2b Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:bridge-649359 Clientid:01:52:54:00:2f:d7:2b}
	I0211 03:23:38.412536   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined IP address 192.168.61.91 and MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:38.412760   76224 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0211 03:23:38.417850   76224 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0211 03:23:38.432430   76224 kubeadm.go:883] updating cluster {Name:bridge-649359 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-649359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0211 03:23:38.432565   76224 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0211 03:23:38.432625   76224 ssh_runner.go:195] Run: sudo crictl images --output json
	I0211 03:23:38.467470   76224 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.1". assuming images are not preloaded.
	I0211 03:23:38.467540   76224 ssh_runner.go:195] Run: which lz4
	I0211 03:23:38.472291   76224 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0211 03:23:38.477628   76224 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0211 03:23:38.477655   76224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398670900 bytes)
	I0211 03:23:39.812140   76224 crio.go:462] duration metric: took 1.339890711s to copy over tarball
	I0211 03:23:39.812220   76224 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0211 03:23:42.174273   76224 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.362027518s)
	I0211 03:23:42.174298   76224 crio.go:469] duration metric: took 2.362130701s to extract the tarball
	I0211 03:23:42.174308   76224 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0211 03:23:42.212441   76224 ssh_runner.go:195] Run: sudo crictl images --output json
	I0211 03:23:42.259137   76224 crio.go:514] all images are preloaded for cri-o runtime.
	I0211 03:23:42.259167   76224 cache_images.go:84] Images are preloaded, skipping loading
	I0211 03:23:42.259183   76224 kubeadm.go:934] updating node { 192.168.61.91 8443 v1.32.1 crio true true} ...
	I0211 03:23:42.259323   76224 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=bridge-649359 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.91
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:bridge-649359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0211 03:23:42.259405   76224 ssh_runner.go:195] Run: crio config
	I0211 03:23:42.310270   76224 cni.go:84] Creating CNI manager for "bridge"
	I0211 03:23:42.310296   76224 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0211 03:23:42.310319   76224 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.91 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-649359 NodeName:bridge-649359 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.91"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.91 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0211 03:23:42.310444   76224 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.91
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-649359"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.91"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.91"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0211 03:23:42.310509   76224 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0211 03:23:42.321508   76224 binaries.go:44] Found k8s binaries, skipping transfer
	I0211 03:23:42.321574   76224 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0211 03:23:42.331133   76224 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0211 03:23:42.348183   76224 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0211 03:23:42.363793   76224 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
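The kubeadm.yaml.new copied above (2290 bytes) is the rendered form of the config dump printed earlier, filled in from the cluster values in this log (advertise address 192.168.61.91, pod subnet 10.244.0.0/16, cgroupfs driver, v1.32.1). Below is a minimal Go sketch of that kind of template rendering with a deliberately abbreviated template; the clusterParams struct and the template text are illustrative assumptions, not minikube's actual bootstrapper code.

	package main
	
	import (
		"os"
		"text/template"
	)
	
	// clusterParams carries the handful of values visible in the log above.
	// The struct is illustrative; the real config carries far more fields.
	type clusterParams struct {
		AdvertiseAddress string
		BindPort         int
		NodeName         string
		PodSubnet        string
		ServiceSubnet    string
		CgroupDriver     string
		K8sVersion       string
	}
	
	// A deliberately abbreviated kubeadm/kubelet config template.
	const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  name: "{{.NodeName}}"
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	kubernetesVersion: {{.K8sVersion}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceSubnet}}
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	cgroupDriver: {{.CgroupDriver}}
	`
	
	func main() {
		p := clusterParams{
			AdvertiseAddress: "192.168.61.91",
			BindPort:         8443,
			NodeName:         "bridge-649359",
			PodSubnet:        "10.244.0.0/16",
			ServiceSubnet:    "10.96.0.0/12",
			CgroupDriver:     "cgroupfs",
			K8sVersion:       "v1.32.1",
		}
		t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
		if err := t.Execute(os.Stdout, p); err != nil {
			panic(err)
		}
	}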
	I0211 03:23:42.380017   76224 ssh_runner.go:195] Run: grep 192.168.61.91	control-plane.minikube.internal$ /etc/hosts
	I0211 03:23:42.385032   76224 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.91	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0211 03:23:42.397563   76224 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0211 03:23:42.523999   76224 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0211 03:23:42.541483   76224 certs.go:68] Setting up /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359 for IP: 192.168.61.91
	I0211 03:23:42.541513   76224 certs.go:194] generating shared ca certs ...
	I0211 03:23:42.541537   76224 certs.go:226] acquiring lock for ca certs: {Name:mk14e70e4f3b98aff6eac535114852cc1d70eb3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 03:23:42.541716   76224 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20400-12456/.minikube/ca.key
	I0211 03:23:42.541775   76224 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20400-12456/.minikube/proxy-client-ca.key
	I0211 03:23:42.541788   76224 certs.go:256] generating profile certs ...
	I0211 03:23:42.541855   76224 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/client.key
	I0211 03:23:42.541872   76224 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/client.crt with IP's: []
	I0211 03:23:42.645334   76224 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/client.crt ...
	I0211 03:23:42.645359   76224 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/client.crt: {Name:mk0338e38361e05c75c2b3a994416e9f58924163 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 03:23:42.645552   76224 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/client.key ...
	I0211 03:23:42.645567   76224 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/client.key: {Name:mk6d12c32427bf53d242b075e652c1ff02636b6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 03:23:42.645670   76224 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/apiserver.key.aa660d69
	I0211 03:23:42.645686   76224 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/apiserver.crt.aa660d69 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.91]
	I0211 03:23:42.778443   76224 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/apiserver.crt.aa660d69 ...
	I0211 03:23:42.778468   76224 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/apiserver.crt.aa660d69: {Name:mkd0a56a0a00f7f1b41760f04afa85e6e0184dbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 03:23:42.778628   76224 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/apiserver.key.aa660d69 ...
	I0211 03:23:42.778645   76224 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/apiserver.key.aa660d69: {Name:mke07d566ab34107bd02ef3f5e64b95800771781 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 03:23:42.778742   76224 certs.go:381] copying /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/apiserver.crt.aa660d69 -> /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/apiserver.crt
	I0211 03:23:42.778827   76224 certs.go:385] copying /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/apiserver.key.aa660d69 -> /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/apiserver.key
	I0211 03:23:42.778920   76224 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/proxy-client.key
	I0211 03:23:42.778940   76224 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/proxy-client.crt with IP's: []
	I0211 03:23:42.875092   76224 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/proxy-client.crt ...
	I0211 03:23:42.875121   76224 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/proxy-client.crt: {Name:mkeabdd180390de46ff9b6cea91ea3abddccb352 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 03:23:42.875295   76224 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/proxy-client.key ...
	I0211 03:23:42.875311   76224 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/proxy-client.key: {Name:mkd0db8b768de67babf0cb224e84ca4a2da93731 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 03:23:42.875495   76224 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/19645.pem (1338 bytes)
	W0211 03:23:42.875544   76224 certs.go:480] ignoring /home/jenkins/minikube-integration/20400-12456/.minikube/certs/19645_empty.pem, impossibly tiny 0 bytes
	I0211 03:23:42.875560   76224 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca-key.pem (1675 bytes)
	I0211 03:23:42.875605   76224 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/ca.pem (1078 bytes)
	I0211 03:23:42.875637   76224 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/cert.pem (1123 bytes)
	I0211 03:23:42.875670   76224 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12456/.minikube/certs/key.pem (1679 bytes)
	I0211 03:23:42.875727   76224 certs.go:484] found cert: /home/jenkins/minikube-integration/20400-12456/.minikube/files/etc/ssl/certs/196452.pem (1708 bytes)
	I0211 03:23:42.876268   76224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0211 03:23:42.901656   76224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0211 03:23:42.927101   76224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0211 03:23:42.954623   76224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0211 03:23:42.980378   76224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0211 03:23:43.009888   76224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0211 03:23:43.034956   76224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0211 03:23:43.060866   76224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/bridge-649359/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0211 03:23:43.084114   76224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0211 03:23:43.105495   76224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/certs/19645.pem --> /usr/share/ca-certificates/19645.pem (1338 bytes)
	I0211 03:23:43.131325   76224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20400-12456/.minikube/files/etc/ssl/certs/196452.pem --> /usr/share/ca-certificates/196452.pem (1708 bytes)
	I0211 03:23:43.159178   76224 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0211 03:23:43.176120   76224 ssh_runner.go:195] Run: openssl version
	I0211 03:23:43.181981   76224 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0211 03:23:43.192433   76224 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0211 03:23:43.196878   76224 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 11 02:02 /usr/share/ca-certificates/minikubeCA.pem
	I0211 03:23:43.196935   76224 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0211 03:23:43.202795   76224 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0211 03:23:43.212979   76224 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19645.pem && ln -fs /usr/share/ca-certificates/19645.pem /etc/ssl/certs/19645.pem"
	I0211 03:23:43.228900   76224 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19645.pem
	I0211 03:23:43.234639   76224 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 11 02:09 /usr/share/ca-certificates/19645.pem
	I0211 03:23:43.234695   76224 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19645.pem
	I0211 03:23:43.242209   76224 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/19645.pem /etc/ssl/certs/51391683.0"
	I0211 03:23:43.259478   76224 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/196452.pem && ln -fs /usr/share/ca-certificates/196452.pem /etc/ssl/certs/196452.pem"
	I0211 03:23:43.276539   76224 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/196452.pem
	I0211 03:23:43.282230   76224 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 11 02:09 /usr/share/ca-certificates/196452.pem
	I0211 03:23:43.282289   76224 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/196452.pem
	I0211 03:23:43.289640   76224 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/196452.pem /etc/ssl/certs/3ec20f2e.0"
	I0211 03:23:43.306157   76224 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0211 03:23:43.311168   76224 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0211 03:23:43.311222   76224 kubeadm.go:392] StartCluster: {Name:bridge-649359 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:bridge-649359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0211 03:23:43.311303   76224 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0211 03:23:43.311364   76224 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0211 03:23:43.347602   76224 cri.go:89] found id: ""
	I0211 03:23:43.347669   76224 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0211 03:23:43.356681   76224 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0211 03:23:43.365865   76224 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0211 03:23:43.377932   76224 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0211 03:23:43.377951   76224 kubeadm.go:157] found existing configuration files:
	
	I0211 03:23:43.377992   76224 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0211 03:23:43.388582   76224 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0211 03:23:43.388647   76224 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0211 03:23:43.398236   76224 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0211 03:23:43.409266   76224 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0211 03:23:43.409330   76224 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0211 03:23:43.418959   76224 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0211 03:23:43.427405   76224 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0211 03:23:43.427446   76224 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0211 03:23:43.436462   76224 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0211 03:23:43.448305   76224 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0211 03:23:43.448356   76224 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0211 03:23:43.457848   76224 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0211 03:23:43.514222   76224 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0211 03:23:43.514402   76224 kubeadm.go:310] [preflight] Running pre-flight checks
	I0211 03:23:43.614485   76224 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0211 03:23:43.614631   76224 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0211 03:23:43.614780   76224 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0211 03:23:43.627379   76224 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0211 03:23:43.960452   76224 out.go:235]   - Generating certificates and keys ...
	I0211 03:23:43.960574   76224 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0211 03:23:43.960650   76224 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0211 03:23:43.960751   76224 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0211 03:23:44.201857   76224 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0211 03:23:44.476985   76224 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0211 03:23:44.542297   76224 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0211 03:23:44.649174   76224 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0211 03:23:44.649387   76224 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [bridge-649359 localhost] and IPs [192.168.61.91 127.0.0.1 ::1]
	I0211 03:23:44.759204   76224 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0211 03:23:44.759426   76224 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [bridge-649359 localhost] and IPs [192.168.61.91 127.0.0.1 ::1]
	I0211 03:23:44.917127   76224 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0211 03:23:45.151892   76224 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0211 03:23:45.330088   76224 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0211 03:23:45.330501   76224 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0211 03:23:45.521286   76224 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0211 03:23:45.681260   76224 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0211 03:23:45.749229   76224 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0211 03:23:45.847105   76224 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0211 03:23:45.955946   76224 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0211 03:23:45.956453   76224 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0211 03:23:45.960579   76224 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0211 03:23:43.489331   74474 pod_ready.go:103] pod "coredns-668d6bf9bc-ktrqg" in "kube-system" namespace has status "Ready":"False"
	I0211 03:23:45.489732   74474 pod_ready.go:103] pod "coredns-668d6bf9bc-ktrqg" in "kube-system" namespace has status "Ready":"False"
	I0211 03:23:46.051204   76224 out.go:235]   - Booting up control plane ...
	I0211 03:23:46.051361   76224 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0211 03:23:46.051453   76224 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0211 03:23:46.051580   76224 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0211 03:23:46.051746   76224 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0211 03:23:46.051879   76224 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0211 03:23:46.051971   76224 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0211 03:23:46.119740   76224 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0211 03:23:46.119907   76224 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0211 03:23:46.620433   76224 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.018001ms
	I0211 03:23:46.620551   76224 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0211 03:23:51.619644   76224 kubeadm.go:310] [api-check] The API server is healthy after 5.002019751s
	I0211 03:23:51.638423   76224 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0211 03:23:51.657182   76224 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0211 03:23:51.698624   76224 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0211 03:23:51.698895   76224 kubeadm.go:310] [mark-control-plane] Marking the node bridge-649359 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0211 03:23:51.711403   76224 kubeadm.go:310] [bootstrap-token] Using token: 8iaz75.2wbh73x0qbtaotir
	I0211 03:23:47.989129   74474 pod_ready.go:103] pod "coredns-668d6bf9bc-ktrqg" in "kube-system" namespace has status "Ready":"False"
	I0211 03:23:50.488696   74474 pod_ready.go:103] pod "coredns-668d6bf9bc-ktrqg" in "kube-system" namespace has status "Ready":"False"
	I0211 03:23:52.489440   74474 pod_ready.go:103] pod "coredns-668d6bf9bc-ktrqg" in "kube-system" namespace has status "Ready":"False"
	I0211 03:23:51.712648   76224 out.go:235]   - Configuring RBAC rules ...
	I0211 03:23:51.712802   76224 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0211 03:23:51.722004   76224 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0211 03:23:51.731386   76224 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0211 03:23:51.735273   76224 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0211 03:23:51.739648   76224 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0211 03:23:51.748713   76224 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0211 03:23:52.026065   76224 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0211 03:23:52.448492   76224 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0211 03:23:53.026361   76224 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0211 03:23:53.026394   76224 kubeadm.go:310] 
	I0211 03:23:53.026473   76224 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0211 03:23:53.026485   76224 kubeadm.go:310] 
	I0211 03:23:53.026596   76224 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0211 03:23:53.026607   76224 kubeadm.go:310] 
	I0211 03:23:53.026640   76224 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0211 03:23:53.026761   76224 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0211 03:23:53.026848   76224 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0211 03:23:53.026861   76224 kubeadm.go:310] 
	I0211 03:23:53.026960   76224 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0211 03:23:53.026972   76224 kubeadm.go:310] 
	I0211 03:23:53.027033   76224 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0211 03:23:53.027047   76224 kubeadm.go:310] 
	I0211 03:23:53.027091   76224 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0211 03:23:53.027155   76224 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0211 03:23:53.027225   76224 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0211 03:23:53.027242   76224 kubeadm.go:310] 
	I0211 03:23:53.027376   76224 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0211 03:23:53.027479   76224 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0211 03:23:53.027489   76224 kubeadm.go:310] 
	I0211 03:23:53.027588   76224 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 8iaz75.2wbh73x0qbtaotir \
	I0211 03:23:53.027754   76224 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2e161f5cde2e462cb9fb80847b9945297701bdc8e7251bde04f5738d45684f8b \
	I0211 03:23:53.027794   76224 kubeadm.go:310] 	--control-plane 
	I0211 03:23:53.027804   76224 kubeadm.go:310] 
	I0211 03:23:53.027924   76224 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0211 03:23:53.027934   76224 kubeadm.go:310] 
	I0211 03:23:53.028040   76224 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 8iaz75.2wbh73x0qbtaotir \
	I0211 03:23:53.028162   76224 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:2e161f5cde2e462cb9fb80847b9945297701bdc8e7251bde04f5738d45684f8b 
	I0211 03:23:53.028360   76224 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0211 03:23:53.028615   76224 cni.go:84] Creating CNI manager for "bridge"
	I0211 03:23:53.030131   76224 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0211 03:23:53.031456   76224 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0211 03:23:53.042986   76224 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0211 03:23:53.061766   76224 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0211 03:23:53.061896   76224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 03:23:53.061898   76224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes bridge-649359 minikube.k8s.io/updated_at=2025_02_11T03_23_53_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=8e91f70b9b442caa4bec80b031add390ac34d321 minikube.k8s.io/name=bridge-649359 minikube.k8s.io/primary=true
	I0211 03:23:53.200359   76224 ops.go:34] apiserver oom_adj: -16
	I0211 03:23:53.203298   76224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 03:23:53.704043   76224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 03:23:54.204265   76224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 03:23:54.704063   76224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 03:23:55.204057   76224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 03:23:55.704093   76224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 03:23:56.204089   76224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0211 03:23:56.285166   76224 kubeadm.go:1113] duration metric: took 3.223324508s to wait for elevateKubeSystemPrivileges
	I0211 03:23:56.285198   76224 kubeadm.go:394] duration metric: took 12.973978579s to StartCluster
	I0211 03:23:56.285228   76224 settings.go:142] acquiring lock: {Name:mkf2645a714cc5873c434b18e1494d4128c48052 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 03:23:56.285310   76224 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20400-12456/kubeconfig
	I0211 03:23:56.286865   76224 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12456/kubeconfig: {Name:mkd961d61f748b29ba3bb0ad55f8216d88f98444 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 03:23:56.287154   76224 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0211 03:23:56.287177   76224 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0211 03:23:56.287229   76224 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0211 03:23:56.287317   76224 addons.go:69] Setting storage-provisioner=true in profile "bridge-649359"
	I0211 03:23:56.287337   76224 addons.go:238] Setting addon storage-provisioner=true in "bridge-649359"
	I0211 03:23:56.287349   76224 addons.go:69] Setting default-storageclass=true in profile "bridge-649359"
	I0211 03:23:56.287368   76224 host.go:66] Checking if "bridge-649359" exists ...
	I0211 03:23:56.287392   76224 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-649359"
	I0211 03:23:56.287444   76224 config.go:182] Loaded profile config "bridge-649359": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0211 03:23:56.287874   76224 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 03:23:56.287916   76224 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 03:23:56.287929   76224 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 03:23:56.287963   76224 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 03:23:56.288527   76224 out.go:177] * Verifying Kubernetes components...
	I0211 03:23:56.289823   76224 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0211 03:23:56.304775   76224 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34409
	I0211 03:23:56.305153   76224 main.go:141] libmachine: () Calling .GetVersion
	I0211 03:23:56.305632   76224 main.go:141] libmachine: Using API Version  1
	I0211 03:23:56.305658   76224 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 03:23:56.305984   76224 main.go:141] libmachine: () Calling .GetMachineName
	I0211 03:23:56.306239   76224 main.go:141] libmachine: (bridge-649359) Calling .GetState
	I0211 03:23:56.308210   76224 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35843
	I0211 03:23:56.308528   76224 main.go:141] libmachine: () Calling .GetVersion
	I0211 03:23:56.308996   76224 main.go:141] libmachine: Using API Version  1
	I0211 03:23:56.309015   76224 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 03:23:56.309301   76224 main.go:141] libmachine: () Calling .GetMachineName
	I0211 03:23:56.309726   76224 addons.go:238] Setting addon default-storageclass=true in "bridge-649359"
	I0211 03:23:56.309764   76224 host.go:66] Checking if "bridge-649359" exists ...
	I0211 03:23:56.309864   76224 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 03:23:56.309894   76224 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 03:23:56.310098   76224 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 03:23:56.310135   76224 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 03:23:56.324154   76224 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41297
	I0211 03:23:56.324556   76224 main.go:141] libmachine: () Calling .GetVersion
	I0211 03:23:56.325003   76224 main.go:141] libmachine: Using API Version  1
	I0211 03:23:56.325017   76224 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 03:23:56.325276   76224 main.go:141] libmachine: () Calling .GetMachineName
	I0211 03:23:56.325490   76224 main.go:141] libmachine: (bridge-649359) Calling .GetState
	I0211 03:23:56.325869   76224 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42691
	I0211 03:23:56.326407   76224 main.go:141] libmachine: () Calling .GetVersion
	I0211 03:23:56.326922   76224 main.go:141] libmachine: Using API Version  1
	I0211 03:23:56.326946   76224 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 03:23:56.327340   76224 main.go:141] libmachine: () Calling .GetMachineName
	I0211 03:23:56.327456   76224 main.go:141] libmachine: (bridge-649359) Calling .DriverName
	I0211 03:23:56.327952   76224 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 03:23:56.327990   76224 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 03:23:56.329016   76224 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0211 03:23:54.492072   74474 pod_ready.go:103] pod "coredns-668d6bf9bc-ktrqg" in "kube-system" namespace has status "Ready":"False"
	I0211 03:23:55.494321   74474 pod_ready.go:93] pod "coredns-668d6bf9bc-ktrqg" in "kube-system" namespace has status "Ready":"True"
	I0211 03:23:55.494351   74474 pod_ready.go:82] duration metric: took 16.51100516s for pod "coredns-668d6bf9bc-ktrqg" in "kube-system" namespace to be "Ready" ...
	I0211 03:23:55.494365   74474 pod_ready.go:79] waiting up to 15m0s for pod "etcd-flannel-649359" in "kube-system" namespace to be "Ready" ...
	I0211 03:23:55.507277   74474 pod_ready.go:93] pod "etcd-flannel-649359" in "kube-system" namespace has status "Ready":"True"
	I0211 03:23:55.507298   74474 pod_ready.go:82] duration metric: took 12.925664ms for pod "etcd-flannel-649359" in "kube-system" namespace to be "Ready" ...
	I0211 03:23:55.507308   74474 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-flannel-649359" in "kube-system" namespace to be "Ready" ...
	I0211 03:23:55.516108   74474 pod_ready.go:93] pod "kube-apiserver-flannel-649359" in "kube-system" namespace has status "Ready":"True"
	I0211 03:23:55.516128   74474 pod_ready.go:82] duration metric: took 8.814128ms for pod "kube-apiserver-flannel-649359" in "kube-system" namespace to be "Ready" ...
	I0211 03:23:55.516137   74474 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-flannel-649359" in "kube-system" namespace to be "Ready" ...
	I0211 03:23:55.523571   74474 pod_ready.go:93] pod "kube-controller-manager-flannel-649359" in "kube-system" namespace has status "Ready":"True"
	I0211 03:23:55.523589   74474 pod_ready.go:82] duration metric: took 7.446719ms for pod "kube-controller-manager-flannel-649359" in "kube-system" namespace to be "Ready" ...
	I0211 03:23:55.523597   74474 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-svqjf" in "kube-system" namespace to be "Ready" ...
	I0211 03:23:55.528072   74474 pod_ready.go:93] pod "kube-proxy-svqjf" in "kube-system" namespace has status "Ready":"True"
	I0211 03:23:55.528088   74474 pod_ready.go:82] duration metric: took 4.48524ms for pod "kube-proxy-svqjf" in "kube-system" namespace to be "Ready" ...
	I0211 03:23:55.528096   74474 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-flannel-649359" in "kube-system" namespace to be "Ready" ...
	I0211 03:23:55.887698   74474 pod_ready.go:93] pod "kube-scheduler-flannel-649359" in "kube-system" namespace has status "Ready":"True"
	I0211 03:23:55.887720   74474 pod_ready.go:82] duration metric: took 359.618939ms for pod "kube-scheduler-flannel-649359" in "kube-system" namespace to be "Ready" ...
	I0211 03:23:55.887735   74474 pod_ready.go:39] duration metric: took 16.909780883s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0211 03:23:55.887755   74474 api_server.go:52] waiting for apiserver process to appear ...
	I0211 03:23:55.887802   74474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:23:55.902278   74474 api_server.go:72] duration metric: took 24.7430451s to wait for apiserver process to appear ...
	I0211 03:23:55.902297   74474 api_server.go:88] waiting for apiserver healthz status ...
	I0211 03:23:55.902312   74474 api_server.go:253] Checking apiserver healthz at https://192.168.72.59:8443/healthz ...
	I0211 03:23:55.907542   74474 api_server.go:279] https://192.168.72.59:8443/healthz returned 200:
	ok
	I0211 03:23:55.908534   74474 api_server.go:141] control plane version: v1.32.1
	I0211 03:23:55.908552   74474 api_server.go:131] duration metric: took 6.249772ms to wait for apiserver health ...
	I0211 03:23:55.908559   74474 system_pods.go:43] waiting for kube-system pods to appear ...
	I0211 03:23:56.088276   74474 system_pods.go:59] 7 kube-system pods found
	I0211 03:23:56.088313   74474 system_pods.go:61] "coredns-668d6bf9bc-ktrqg" [25de257a-811d-450d-9f38-d3cbbe560bc7] Running
	I0211 03:23:56.088320   74474 system_pods.go:61] "etcd-flannel-649359" [446a2b15-8be7-4fbb-9ba5-80ad99efe86f] Running
	I0211 03:23:56.088326   74474 system_pods.go:61] "kube-apiserver-flannel-649359" [3aa2d94e-4bac-4f10-92eb-62c5bc8a9497] Running
	I0211 03:23:56.088331   74474 system_pods.go:61] "kube-controller-manager-flannel-649359" [e38e3569-0f84-46e3-9cb3-d55fc351cf71] Running
	I0211 03:23:56.088335   74474 system_pods.go:61] "kube-proxy-svqjf" [837e961f-4d98-436b-8d7b-1c58fc12c210] Running
	I0211 03:23:56.088340   74474 system_pods.go:61] "kube-scheduler-flannel-649359" [dc8b4f66-a15b-429e-8c5d-564222514190] Running
	I0211 03:23:56.088344   74474 system_pods.go:61] "storage-provisioner" [4006f839-4055-49b7-a80b-727ea6577959] Running
	I0211 03:23:56.088352   74474 system_pods.go:74] duration metric: took 179.787278ms to wait for pod list to return data ...
	I0211 03:23:56.088362   74474 default_sa.go:34] waiting for default service account to be created ...
	I0211 03:23:56.291065   74474 default_sa.go:45] found service account: "default"
	I0211 03:23:56.291104   74474 default_sa.go:55] duration metric: took 202.726139ms for default service account to be created ...
	I0211 03:23:56.291122   74474 system_pods.go:116] waiting for k8s-apps to be running ...
	I0211 03:23:56.488817   74474 system_pods.go:86] 7 kube-system pods found
	I0211 03:23:56.488843   74474 system_pods.go:89] "coredns-668d6bf9bc-ktrqg" [25de257a-811d-450d-9f38-d3cbbe560bc7] Running
	I0211 03:23:56.488849   74474 system_pods.go:89] "etcd-flannel-649359" [446a2b15-8be7-4fbb-9ba5-80ad99efe86f] Running
	I0211 03:23:56.488852   74474 system_pods.go:89] "kube-apiserver-flannel-649359" [3aa2d94e-4bac-4f10-92eb-62c5bc8a9497] Running
	I0211 03:23:56.488856   74474 system_pods.go:89] "kube-controller-manager-flannel-649359" [e38e3569-0f84-46e3-9cb3-d55fc351cf71] Running
	I0211 03:23:56.488859   74474 system_pods.go:89] "kube-proxy-svqjf" [837e961f-4d98-436b-8d7b-1c58fc12c210] Running
	I0211 03:23:56.488862   74474 system_pods.go:89] "kube-scheduler-flannel-649359" [dc8b4f66-a15b-429e-8c5d-564222514190] Running
	I0211 03:23:56.488865   74474 system_pods.go:89] "storage-provisioner" [4006f839-4055-49b7-a80b-727ea6577959] Running
	I0211 03:23:56.488872   74474 system_pods.go:126] duration metric: took 197.742474ms to wait for k8s-apps to be running ...
	I0211 03:23:56.488878   74474 system_svc.go:44] waiting for kubelet service to be running ....
	I0211 03:23:56.488917   74474 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0211 03:23:56.505335   74474 system_svc.go:56] duration metric: took 16.448357ms WaitForService to wait for kubelet
	I0211 03:23:56.505361   74474 kubeadm.go:582] duration metric: took 25.346130872s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0211 03:23:56.505377   74474 node_conditions.go:102] verifying NodePressure condition ...
	I0211 03:23:56.689125   74474 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0211 03:23:56.689166   74474 node_conditions.go:123] node cpu capacity is 2
	I0211 03:23:56.689183   74474 node_conditions.go:105] duration metric: took 183.800285ms to run NodePressure ...
	I0211 03:23:56.689199   74474 start.go:241] waiting for startup goroutines ...
	I0211 03:23:56.689208   74474 start.go:246] waiting for cluster config update ...
	I0211 03:23:56.689224   74474 start.go:255] writing updated cluster config ...
	I0211 03:23:56.689599   74474 ssh_runner.go:195] Run: rm -f paused
	I0211 03:23:56.738058   74474 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0211 03:23:56.739640   74474 out.go:177] * Done! kubectl is now configured to use "flannel-649359" cluster and "default" namespace by default
	I0211 03:23:56.330378   76224 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0211 03:23:56.330399   76224 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0211 03:23:56.330425   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHHostname
	I0211 03:23:56.336620   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:56.337050   76224 main.go:141] libmachine: (bridge-649359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:d7:2b", ip: ""} in network mk-bridge-649359: {Iface:virbr3 ExpiryTime:2025-02-11 04:23:29 +0000 UTC Type:0 Mac:52:54:00:2f:d7:2b Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:bridge-649359 Clientid:01:52:54:00:2f:d7:2b}
	I0211 03:23:56.337074   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined IP address 192.168.61.91 and MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:56.337308   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHPort
	I0211 03:23:56.337511   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHKeyPath
	I0211 03:23:56.337643   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHUsername
	I0211 03:23:56.337805   76224 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/bridge-649359/id_rsa Username:docker}
	I0211 03:23:56.344193   76224 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35827
	I0211 03:23:56.344682   76224 main.go:141] libmachine: () Calling .GetVersion
	I0211 03:23:56.345240   76224 main.go:141] libmachine: Using API Version  1
	I0211 03:23:56.345257   76224 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 03:23:56.345755   76224 main.go:141] libmachine: () Calling .GetMachineName
	I0211 03:23:56.345926   76224 main.go:141] libmachine: (bridge-649359) Calling .GetState
	I0211 03:23:56.347412   76224 main.go:141] libmachine: (bridge-649359) Calling .DriverName
	I0211 03:23:56.347649   76224 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0211 03:23:56.347664   76224 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0211 03:23:56.347680   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHHostname
	I0211 03:23:56.350378   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:56.350643   76224 main.go:141] libmachine: (bridge-649359) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2f:d7:2b", ip: ""} in network mk-bridge-649359: {Iface:virbr3 ExpiryTime:2025-02-11 04:23:29 +0000 UTC Type:0 Mac:52:54:00:2f:d7:2b Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:bridge-649359 Clientid:01:52:54:00:2f:d7:2b}
	I0211 03:23:56.350662   76224 main.go:141] libmachine: (bridge-649359) DBG | domain bridge-649359 has defined IP address 192.168.61.91 and MAC address 52:54:00:2f:d7:2b in network mk-bridge-649359
	I0211 03:23:56.350793   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHPort
	I0211 03:23:56.350940   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHKeyPath
	I0211 03:23:56.351037   76224 main.go:141] libmachine: (bridge-649359) Calling .GetSSHUsername
	I0211 03:23:56.351138   76224 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/bridge-649359/id_rsa Username:docker}
	I0211 03:23:56.443520   76224 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0211 03:23:56.464811   76224 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0211 03:23:56.601622   76224 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0211 03:23:56.647861   76224 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0211 03:23:56.917727   76224 start.go:971] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0211 03:23:56.917816   76224 main.go:141] libmachine: Making call to close driver server
	I0211 03:23:56.917847   76224 main.go:141] libmachine: (bridge-649359) Calling .Close
	I0211 03:23:56.918167   76224 main.go:141] libmachine: Successfully made call to close driver server
	I0211 03:23:56.918188   76224 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 03:23:56.918198   76224 main.go:141] libmachine: Making call to close driver server
	I0211 03:23:56.918207   76224 main.go:141] libmachine: (bridge-649359) Calling .Close
	I0211 03:23:56.918959   76224 main.go:141] libmachine: (bridge-649359) DBG | Closing plugin on server side
	I0211 03:23:56.918996   76224 main.go:141] libmachine: Successfully made call to close driver server
	I0211 03:23:56.919011   76224 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 03:23:56.919119   76224 node_ready.go:35] waiting up to 15m0s for node "bridge-649359" to be "Ready" ...
	I0211 03:23:56.932685   76224 node_ready.go:49] node "bridge-649359" has status "Ready":"True"
	I0211 03:23:56.932704   76224 node_ready.go:38] duration metric: took 13.54605ms for node "bridge-649359" to be "Ready" ...
	I0211 03:23:56.932714   76224 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0211 03:23:56.951563   76224 pod_ready.go:79] waiting up to 15m0s for pod "etcd-bridge-649359" in "kube-system" namespace to be "Ready" ...
	I0211 03:23:56.951959   76224 main.go:141] libmachine: Making call to close driver server
	I0211 03:23:56.951976   76224 main.go:141] libmachine: (bridge-649359) Calling .Close
	I0211 03:23:56.952230   76224 main.go:141] libmachine: Successfully made call to close driver server
	I0211 03:23:56.952236   76224 main.go:141] libmachine: (bridge-649359) DBG | Closing plugin on server side
	I0211 03:23:56.952247   76224 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 03:23:57.178049   76224 main.go:141] libmachine: Making call to close driver server
	I0211 03:23:57.178075   76224 main.go:141] libmachine: (bridge-649359) Calling .Close
	I0211 03:23:57.178329   76224 main.go:141] libmachine: (bridge-649359) DBG | Closing plugin on server side
	I0211 03:23:57.178352   76224 main.go:141] libmachine: Successfully made call to close driver server
	I0211 03:23:57.178365   76224 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 03:23:57.178374   76224 main.go:141] libmachine: Making call to close driver server
	I0211 03:23:57.178381   76224 main.go:141] libmachine: (bridge-649359) Calling .Close
	I0211 03:23:57.178603   76224 main.go:141] libmachine: Successfully made call to close driver server
	I0211 03:23:57.178624   76224 main.go:141] libmachine: Making call to close connection to plugin binary
	I0211 03:23:57.178629   76224 main.go:141] libmachine: (bridge-649359) DBG | Closing plugin on server side
	I0211 03:23:57.180985   76224 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0211 03:23:57.182315   76224 addons.go:514] duration metric: took 895.081368ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0211 03:23:57.423036   76224 kapi.go:214] "coredns" deployment in "kube-system" namespace and "bridge-649359" context rescaled to 1 replicas
	I0211 03:23:58.957485   76224 pod_ready.go:103] pod "etcd-bridge-649359" in "kube-system" namespace has status "Ready":"False"
	I0211 03:24:01.457551   76224 pod_ready.go:103] pod "etcd-bridge-649359" in "kube-system" namespace has status "Ready":"False"
	I0211 03:24:03.462842   76224 pod_ready.go:103] pod "etcd-bridge-649359" in "kube-system" namespace has status "Ready":"False"
	I0211 03:24:05.957301   76224 pod_ready.go:103] pod "etcd-bridge-649359" in "kube-system" namespace has status "Ready":"False"
	I0211 03:24:06.957208   76224 pod_ready.go:93] pod "etcd-bridge-649359" in "kube-system" namespace has status "Ready":"True"
	I0211 03:24:06.957232   76224 pod_ready.go:82] duration metric: took 10.005644253s for pod "etcd-bridge-649359" in "kube-system" namespace to be "Ready" ...
	I0211 03:24:06.957244   76224 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-bridge-649359" in "kube-system" namespace to be "Ready" ...
	I0211 03:24:06.960650   76224 pod_ready.go:93] pod "kube-apiserver-bridge-649359" in "kube-system" namespace has status "Ready":"True"
	I0211 03:24:06.960668   76224 pod_ready.go:82] duration metric: took 3.416505ms for pod "kube-apiserver-bridge-649359" in "kube-system" namespace to be "Ready" ...
	I0211 03:24:06.960679   76224 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-bridge-649359" in "kube-system" namespace to be "Ready" ...
	I0211 03:24:06.965014   76224 pod_ready.go:93] pod "kube-controller-manager-bridge-649359" in "kube-system" namespace has status "Ready":"True"
	I0211 03:24:06.965028   76224 pod_ready.go:82] duration metric: took 4.342119ms for pod "kube-controller-manager-bridge-649359" in "kube-system" namespace to be "Ready" ...
	I0211 03:24:06.965035   76224 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-9q77c" in "kube-system" namespace to be "Ready" ...
	I0211 03:24:06.969396   76224 pod_ready.go:93] pod "kube-proxy-9q77c" in "kube-system" namespace has status "Ready":"True"
	I0211 03:24:06.969409   76224 pod_ready.go:82] duration metric: took 4.370008ms for pod "kube-proxy-9q77c" in "kube-system" namespace to be "Ready" ...
	I0211 03:24:06.969416   76224 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-bridge-649359" in "kube-system" namespace to be "Ready" ...
	I0211 03:24:06.973028   76224 pod_ready.go:93] pod "kube-scheduler-bridge-649359" in "kube-system" namespace has status "Ready":"True"
	I0211 03:24:06.973041   76224 pod_ready.go:82] duration metric: took 3.620954ms for pod "kube-scheduler-bridge-649359" in "kube-system" namespace to be "Ready" ...
	I0211 03:24:06.973047   76224 pod_ready.go:39] duration metric: took 10.040322007s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0211 03:24:06.973061   76224 api_server.go:52] waiting for apiserver process to appear ...
	I0211 03:24:06.973101   76224 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 03:24:06.987215   76224 api_server.go:72] duration metric: took 10.699994196s to wait for apiserver process to appear ...
	I0211 03:24:06.987240   76224 api_server.go:88] waiting for apiserver healthz status ...
	I0211 03:24:06.987262   76224 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I0211 03:24:06.993172   76224 api_server.go:279] https://192.168.61.91:8443/healthz returned 200:
	ok
	I0211 03:24:06.994390   76224 api_server.go:141] control plane version: v1.32.1
	I0211 03:24:06.994411   76224 api_server.go:131] duration metric: took 7.16426ms to wait for apiserver health ...
	I0211 03:24:06.994418   76224 system_pods.go:43] waiting for kube-system pods to appear ...
	I0211 03:24:07.157057   76224 system_pods.go:59] 7 kube-system pods found
	I0211 03:24:07.157089   76224 system_pods.go:61] "coredns-668d6bf9bc-jfw64" [c6a15f81-0759-41df-957c-d7ad97cc9a6a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0211 03:24:07.157095   76224 system_pods.go:61] "etcd-bridge-649359" [6077ad50-145b-49f1-96b4-ba1fb2c2b33c] Running
	I0211 03:24:07.157101   76224 system_pods.go:61] "kube-apiserver-bridge-649359" [cf9c4983-d1a8-481c-ae23-8867414f715c] Running
	I0211 03:24:07.157105   76224 system_pods.go:61] "kube-controller-manager-bridge-649359" [f85cff22-a57e-45ba-9e4b-1583816a9ccb] Running
	I0211 03:24:07.157109   76224 system_pods.go:61] "kube-proxy-9q77c" [be4d3372-9382-4dbd-a850-5729fa3918a5] Running
	I0211 03:24:07.157112   76224 system_pods.go:61] "kube-scheduler-bridge-649359" [d85c8067-6a92-455f-8eb2-bc0f5e7b2d5c] Running
	I0211 03:24:07.157115   76224 system_pods.go:61] "storage-provisioner" [446d17e1-30af-4afc-86b0-f55654c31967] Running
	I0211 03:24:07.157122   76224 system_pods.go:74] duration metric: took 162.698222ms to wait for pod list to return data ...
	I0211 03:24:07.157128   76224 default_sa.go:34] waiting for default service account to be created ...
	I0211 03:24:07.356681   76224 default_sa.go:45] found service account: "default"
	I0211 03:24:07.356714   76224 default_sa.go:55] duration metric: took 199.579483ms for default service account to be created ...
	I0211 03:24:07.356726   76224 system_pods.go:116] waiting for k8s-apps to be running ...
	I0211 03:24:07.556397   76224 system_pods.go:86] 7 kube-system pods found
	I0211 03:24:07.556432   76224 system_pods.go:89] "coredns-668d6bf9bc-jfw64" [c6a15f81-0759-41df-957c-d7ad97cc9a6a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0211 03:24:07.556440   76224 system_pods.go:89] "etcd-bridge-649359" [6077ad50-145b-49f1-96b4-ba1fb2c2b33c] Running
	I0211 03:24:07.556446   76224 system_pods.go:89] "kube-apiserver-bridge-649359" [cf9c4983-d1a8-481c-ae23-8867414f715c] Running
	I0211 03:24:07.556451   76224 system_pods.go:89] "kube-controller-manager-bridge-649359" [f85cff22-a57e-45ba-9e4b-1583816a9ccb] Running
	I0211 03:24:07.556456   76224 system_pods.go:89] "kube-proxy-9q77c" [be4d3372-9382-4dbd-a850-5729fa3918a5] Running
	I0211 03:24:07.556460   76224 system_pods.go:89] "kube-scheduler-bridge-649359" [d85c8067-6a92-455f-8eb2-bc0f5e7b2d5c] Running
	I0211 03:24:07.556464   76224 system_pods.go:89] "storage-provisioner" [446d17e1-30af-4afc-86b0-f55654c31967] Running
	I0211 03:24:07.556471   76224 system_pods.go:126] duration metric: took 199.7395ms to wait for k8s-apps to be running ...
	I0211 03:24:07.556478   76224 system_svc.go:44] waiting for kubelet service to be running ....
	I0211 03:24:07.556519   76224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0211 03:24:07.571047   76224 system_svc.go:56] duration metric: took 14.559978ms WaitForService to wait for kubelet
	I0211 03:24:07.571081   76224 kubeadm.go:582] duration metric: took 11.283863044s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0211 03:24:07.571111   76224 node_conditions.go:102] verifying NodePressure condition ...
	I0211 03:24:07.756253   76224 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0211 03:24:07.756287   76224 node_conditions.go:123] node cpu capacity is 2
	I0211 03:24:07.756301   76224 node_conditions.go:105] duration metric: took 185.167323ms to run NodePressure ...
	I0211 03:24:07.756318   76224 start.go:241] waiting for startup goroutines ...
	I0211 03:24:07.756328   76224 start.go:246] waiting for cluster config update ...
	I0211 03:24:07.756342   76224 start.go:255] writing updated cluster config ...
	I0211 03:24:07.756642   76224 ssh_runner.go:195] Run: rm -f paused
	I0211 03:24:07.803293   76224 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0211 03:24:07.805468   76224 out.go:177] * Done! kubectl is now configured to use "bridge-649359" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Feb 11 03:37:44 old-k8s-version-244815 crio[625]: time="2025-02-11 03:37:44.542007627Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739245064541982607,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2b81eabb-36aa-4312-b76b-f8ac35a7c3a8 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 11 03:37:44 old-k8s-version-244815 crio[625]: time="2025-02-11 03:37:44.542447412Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=664bf661-a4ed-4125-b752-45bd1c82224f name=/runtime.v1.RuntimeService/ListContainers
	Feb 11 03:37:44 old-k8s-version-244815 crio[625]: time="2025-02-11 03:37:44.542519669Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=664bf661-a4ed-4125-b752-45bd1c82224f name=/runtime.v1.RuntimeService/ListContainers
	Feb 11 03:37:44 old-k8s-version-244815 crio[625]: time="2025-02-11 03:37:44.542573870Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=664bf661-a4ed-4125-b752-45bd1c82224f name=/runtime.v1.RuntimeService/ListContainers
	Feb 11 03:37:44 old-k8s-version-244815 crio[625]: time="2025-02-11 03:37:44.570132736Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=058ae655-5632-4060-bad4-d13016cfd9a7 name=/runtime.v1.RuntimeService/Version
	Feb 11 03:37:44 old-k8s-version-244815 crio[625]: time="2025-02-11 03:37:44.570218252Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=058ae655-5632-4060-bad4-d13016cfd9a7 name=/runtime.v1.RuntimeService/Version
	Feb 11 03:37:44 old-k8s-version-244815 crio[625]: time="2025-02-11 03:37:44.571371072Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=112d8936-4ad2-46ce-9a12-fa2cffd661d3 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 11 03:37:44 old-k8s-version-244815 crio[625]: time="2025-02-11 03:37:44.571777212Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739245064571757567,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=112d8936-4ad2-46ce-9a12-fa2cffd661d3 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 11 03:37:44 old-k8s-version-244815 crio[625]: time="2025-02-11 03:37:44.572236373Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0c02a7fa-4218-4410-b464-366cc46ffe75 name=/runtime.v1.RuntimeService/ListContainers
	Feb 11 03:37:44 old-k8s-version-244815 crio[625]: time="2025-02-11 03:37:44.572310237Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0c02a7fa-4218-4410-b464-366cc46ffe75 name=/runtime.v1.RuntimeService/ListContainers
	Feb 11 03:37:44 old-k8s-version-244815 crio[625]: time="2025-02-11 03:37:44.572345926Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=0c02a7fa-4218-4410-b464-366cc46ffe75 name=/runtime.v1.RuntimeService/ListContainers
	Feb 11 03:37:44 old-k8s-version-244815 crio[625]: time="2025-02-11 03:37:44.601258294Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d665f04a-bdec-4595-8068-aa3cb27562b0 name=/runtime.v1.RuntimeService/Version
	Feb 11 03:37:44 old-k8s-version-244815 crio[625]: time="2025-02-11 03:37:44.601330903Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d665f04a-bdec-4595-8068-aa3cb27562b0 name=/runtime.v1.RuntimeService/Version
	Feb 11 03:37:44 old-k8s-version-244815 crio[625]: time="2025-02-11 03:37:44.602360548Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b9d5f16c-ffae-479f-929d-ed477dc90d60 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 11 03:37:44 old-k8s-version-244815 crio[625]: time="2025-02-11 03:37:44.602745427Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739245064602725807,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b9d5f16c-ffae-479f-929d-ed477dc90d60 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 11 03:37:44 old-k8s-version-244815 crio[625]: time="2025-02-11 03:37:44.603244361Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5a0ff941-0af1-4af1-87b7-0f6eee1506a5 name=/runtime.v1.RuntimeService/ListContainers
	Feb 11 03:37:44 old-k8s-version-244815 crio[625]: time="2025-02-11 03:37:44.603291073Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5a0ff941-0af1-4af1-87b7-0f6eee1506a5 name=/runtime.v1.RuntimeService/ListContainers
	Feb 11 03:37:44 old-k8s-version-244815 crio[625]: time="2025-02-11 03:37:44.603327885Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=5a0ff941-0af1-4af1-87b7-0f6eee1506a5 name=/runtime.v1.RuntimeService/ListContainers
	Feb 11 03:37:44 old-k8s-version-244815 crio[625]: time="2025-02-11 03:37:44.632484090Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=383062e8-d422-473a-a9c2-177cc18c69f0 name=/runtime.v1.RuntimeService/Version
	Feb 11 03:37:44 old-k8s-version-244815 crio[625]: time="2025-02-11 03:37:44.632571319Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=383062e8-d422-473a-a9c2-177cc18c69f0 name=/runtime.v1.RuntimeService/Version
	Feb 11 03:37:44 old-k8s-version-244815 crio[625]: time="2025-02-11 03:37:44.633768780Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d66855cd-11fa-4cfc-8350-6580efd846e3 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 11 03:37:44 old-k8s-version-244815 crio[625]: time="2025-02-11 03:37:44.634127827Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1739245064634100189,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d66855cd-11fa-4cfc-8350-6580efd846e3 name=/runtime.v1.ImageService/ImageFsInfo
	Feb 11 03:37:44 old-k8s-version-244815 crio[625]: time="2025-02-11 03:37:44.634563263Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7564d455-9c9b-46d9-b421-fd71e7e7c200 name=/runtime.v1.RuntimeService/ListContainers
	Feb 11 03:37:44 old-k8s-version-244815 crio[625]: time="2025-02-11 03:37:44.634606612Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7564d455-9c9b-46d9-b421-fd71e7e7c200 name=/runtime.v1.RuntimeService/ListContainers
	Feb 11 03:37:44 old-k8s-version-244815 crio[625]: time="2025-02-11 03:37:44.634636571Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=7564d455-9c9b-46d9-b421-fd71e7e7c200 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Feb11 03:14] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053978] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039203] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.074931] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.949355] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.579835] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.978569] systemd-fstab-generator[552]: Ignoring "noauto" option for root device
	[  +0.065488] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058561] systemd-fstab-generator[564]: Ignoring "noauto" option for root device
	[  +0.199108] systemd-fstab-generator[578]: Ignoring "noauto" option for root device
	[  +0.179272] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.277363] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +6.347509] systemd-fstab-generator[869]: Ignoring "noauto" option for root device
	[  +0.058497] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.802316] systemd-fstab-generator[993]: Ignoring "noauto" option for root device
	[ +12.357601] kauditd_printk_skb: 46 callbacks suppressed
	[Feb11 03:18] systemd-fstab-generator[5029]: Ignoring "noauto" option for root device
	[Feb11 03:20] systemd-fstab-generator[5308]: Ignoring "noauto" option for root device
	[  +0.093052] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 03:37:44 up 23 min,  0 users,  load average: 0.00, 0.00, 0.04
	Linux old-k8s-version-244815 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Feb 11 03:37:42 old-k8s-version-244815 kubelet[7161]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	Feb 11 03:37:42 old-k8s-version-244815 kubelet[7161]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc0002a00c0, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc0009a0780, 0x24, 0x0, ...)
	Feb 11 03:37:42 old-k8s-version-244815 kubelet[7161]:         /usr/local/go/src/net/dial.go:221 +0x47d
	Feb 11 03:37:42 old-k8s-version-244815 kubelet[7161]: net.(*Dialer).DialContext(0xc000128a80, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc0009a0780, 0x24, 0x0, 0x0, 0x0, ...)
	Feb 11 03:37:42 old-k8s-version-244815 kubelet[7161]:         /usr/local/go/src/net/dial.go:403 +0x22b
	Feb 11 03:37:42 old-k8s-version-244815 kubelet[7161]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc00067fba0, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc0009a0780, 0x24, 0x60, 0x7f0260eed1b0, 0x118, ...)
	Feb 11 03:37:42 old-k8s-version-244815 kubelet[7161]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Feb 11 03:37:42 old-k8s-version-244815 kubelet[7161]: net/http.(*Transport).dial(0xc00089c000, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc0009a0780, 0x24, 0x0, 0x71a00000281, 0x60e, ...)
	Feb 11 03:37:42 old-k8s-version-244815 kubelet[7161]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Feb 11 03:37:42 old-k8s-version-244815 kubelet[7161]: net/http.(*Transport).dialConn(0xc00089c000, 0x4f7fe00, 0xc000052030, 0x0, 0xc00037a600, 0x5, 0xc0009a0780, 0x24, 0x0, 0xc00071b680, ...)
	Feb 11 03:37:42 old-k8s-version-244815 kubelet[7161]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Feb 11 03:37:42 old-k8s-version-244815 kubelet[7161]: net/http.(*Transport).dialConnFor(0xc00089c000, 0xc000731d90)
	Feb 11 03:37:42 old-k8s-version-244815 kubelet[7161]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Feb 11 03:37:42 old-k8s-version-244815 kubelet[7161]: created by net/http.(*Transport).queueForDial
	Feb 11 03:37:42 old-k8s-version-244815 kubelet[7161]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Feb 11 03:37:42 old-k8s-version-244815 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Feb 11 03:37:42 old-k8s-version-244815 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Feb 11 03:37:43 old-k8s-version-244815 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 179.
	Feb 11 03:37:43 old-k8s-version-244815 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Feb 11 03:37:43 old-k8s-version-244815 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Feb 11 03:37:43 old-k8s-version-244815 kubelet[7170]: I0211 03:37:43.632902    7170 server.go:416] Version: v1.20.0
	Feb 11 03:37:43 old-k8s-version-244815 kubelet[7170]: I0211 03:37:43.633152    7170 server.go:837] Client rotation is on, will bootstrap in background
	Feb 11 03:37:43 old-k8s-version-244815 kubelet[7170]: I0211 03:37:43.635004    7170 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Feb 11 03:37:43 old-k8s-version-244815 kubelet[7170]: W0211 03:37:43.635890    7170 manager.go:159] Cannot detect current cgroup on cgroup v2
	Feb 11 03:37:43 old-k8s-version-244815 kubelet[7170]: I0211 03:37:43.635983    7170 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-244815 -n old-k8s-version-244815
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-244815 -n old-k8s-version-244815: exit status 2 (214.992539ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-244815" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (374.64s)

                                                
                                    

Test pass (278/327)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 8.36
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.32.1/json-events 5.2
13 TestDownloadOnly/v1.32.1/preload-exists 0
17 TestDownloadOnly/v1.32.1/LogsDuration 0.06
18 TestDownloadOnly/v1.32.1/DeleteAll 0.13
19 TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.59
22 TestOffline 79.59
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 129.34
31 TestAddons/serial/GCPAuth/Namespaces 2.68
32 TestAddons/serial/GCPAuth/FakeCredentials 9.49
35 TestAddons/parallel/Registry 16.46
37 TestAddons/parallel/InspektorGadget 11.17
38 TestAddons/parallel/MetricsServer 7.04
40 TestAddons/parallel/CSI 66.8
41 TestAddons/parallel/Headlamp 19.68
42 TestAddons/parallel/CloudSpanner 5.61
43 TestAddons/parallel/LocalPath 59.22
44 TestAddons/parallel/NvidiaDevicePlugin 6.88
45 TestAddons/parallel/Yakd 11.95
47 TestAddons/StoppedEnableDisable 91.22
48 TestCertOptions 61.22
49 TestCertExpiration 640.02
51 TestForceSystemdFlag 100.44
52 TestForceSystemdEnv 44.96
54 TestKVMDriverInstallOrUpdate 4.16
58 TestErrorSpam/setup 40.54
59 TestErrorSpam/start 0.34
60 TestErrorSpam/status 0.72
61 TestErrorSpam/pause 1.51
62 TestErrorSpam/unpause 1.69
63 TestErrorSpam/stop 4.27
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 82.15
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 29.68
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.1
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.12
75 TestFunctional/serial/CacheCmd/cache/add_local 1.86
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.61
80 TestFunctional/serial/CacheCmd/cache/delete 0.09
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
83 TestFunctional/serial/ExtraConfig 36.34
84 TestFunctional/serial/ComponentHealth 0.06
85 TestFunctional/serial/LogsCmd 1.24
86 TestFunctional/serial/LogsFileCmd 1.37
87 TestFunctional/serial/InvalidService 4.01
89 TestFunctional/parallel/ConfigCmd 0.35
90 TestFunctional/parallel/DashboardCmd 13.91
91 TestFunctional/parallel/DryRun 0.29
92 TestFunctional/parallel/InternationalLanguage 0.15
93 TestFunctional/parallel/StatusCmd 0.92
97 TestFunctional/parallel/ServiceCmdConnect 7.5
98 TestFunctional/parallel/AddonsCmd 0.13
99 TestFunctional/parallel/PersistentVolumeClaim 37.39
101 TestFunctional/parallel/SSHCmd 0.41
102 TestFunctional/parallel/CpCmd 1.45
103 TestFunctional/parallel/MySQL 28.66
104 TestFunctional/parallel/FileSync 0.31
105 TestFunctional/parallel/CertSync 1.63
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.61
113 TestFunctional/parallel/License 0.18
114 TestFunctional/parallel/ServiceCmd/DeployApp 11.21
115 TestFunctional/parallel/ProfileCmd/profile_not_create 0.51
116 TestFunctional/parallel/ProfileCmd/profile_list 0.59
117 TestFunctional/parallel/Version/short 0.04
118 TestFunctional/parallel/Version/components 0.44
119 TestFunctional/parallel/ProfileCmd/profile_json_output 0.53
120 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
121 TestFunctional/parallel/ImageCommands/ImageListTable 0.89
122 TestFunctional/parallel/ImageCommands/ImageListJson 0.41
123 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
124 TestFunctional/parallel/ImageCommands/ImageBuild 3.85
125 TestFunctional/parallel/ImageCommands/Setup 1.67
126 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.81
127 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
128 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
129 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
130 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.85
131 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.78
132 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.6
133 TestFunctional/parallel/ImageCommands/ImageRemove 2.32
134 TestFunctional/parallel/ServiceCmd/List 0.44
135 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.89
136 TestFunctional/parallel/ServiceCmd/JSONOutput 0.44
137 TestFunctional/parallel/ServiceCmd/HTTPS 0.3
138 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.59
139 TestFunctional/parallel/ServiceCmd/Format 0.29
140 TestFunctional/parallel/ServiceCmd/URL 0.29
150 TestFunctional/parallel/MountCmd/any-port 12.87
151 TestFunctional/parallel/MountCmd/specific-port 1.71
152 TestFunctional/parallel/MountCmd/VerifyCleanup 1.61
153 TestFunctional/delete_echo-server_images 0.03
154 TestFunctional/delete_my-image_image 0.01
155 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 188.44
161 TestMultiControlPlane/serial/DeployApp 5.94
162 TestMultiControlPlane/serial/PingHostFromPods 1.1
163 TestMultiControlPlane/serial/AddWorkerNode 55.56
164 TestMultiControlPlane/serial/NodeLabels 0.07
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.84
166 TestMultiControlPlane/serial/CopyFile 12.61
167 TestMultiControlPlane/serial/StopSecondaryNode 91.58
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.64
169 TestMultiControlPlane/serial/RestartSecondaryNode 49.53
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.84
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 454.83
172 TestMultiControlPlane/serial/DeleteSecondaryNode 18.21
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.61
174 TestMultiControlPlane/serial/StopCluster 272.65
175 TestMultiControlPlane/serial/RestartCluster 119.55
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.6
177 TestMultiControlPlane/serial/AddSecondaryNode 75.56
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.83
182 TestJSONOutput/start/Command 48.54
183 TestJSONOutput/start/Audit 0
185 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/pause/Command 0.69
189 TestJSONOutput/pause/Audit 0
191 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/unpause/Command 0.61
195 TestJSONOutput/unpause/Audit 0
197 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/stop/Command 7.36
201 TestJSONOutput/stop/Audit 0
203 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
205 TestErrorJSONOutput 0.2
210 TestMainNoArgs 0.05
211 TestMinikubeProfile 87.65
214 TestMountStart/serial/StartWithMountFirst 27.07
215 TestMountStart/serial/VerifyMountFirst 0.36
216 TestMountStart/serial/StartWithMountSecond 30.86
217 TestMountStart/serial/VerifyMountSecond 0.38
218 TestMountStart/serial/DeleteFirst 0.67
219 TestMountStart/serial/VerifyMountPostDelete 0.37
220 TestMountStart/serial/Stop 1.55
221 TestMountStart/serial/RestartStopped 24.1
222 TestMountStart/serial/VerifyMountPostStop 0.38
225 TestMultiNode/serial/FreshStart2Nodes 116.15
226 TestMultiNode/serial/DeployApp2Nodes 5.15
227 TestMultiNode/serial/PingHostFrom2Pods 0.73
228 TestMultiNode/serial/AddNode 50.89
229 TestMultiNode/serial/MultiNodeLabels 0.06
230 TestMultiNode/serial/ProfileList 0.55
231 TestMultiNode/serial/CopyFile 7.04
232 TestMultiNode/serial/StopNode 2.23
233 TestMultiNode/serial/StartAfterStop 38.86
234 TestMultiNode/serial/RestartKeepsNodes 340.48
235 TestMultiNode/serial/DeleteNode 2.6
236 TestMultiNode/serial/StopMultiNode 181.83
237 TestMultiNode/serial/RestartMultiNode 115.11
238 TestMultiNode/serial/ValidateNameConflict 53
245 TestScheduledStopUnix 114.52
249 TestRunningBinaryUpgrade 228.71
254 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
255 TestNoKubernetes/serial/StartWithK8s 94.06
256 TestNoKubernetes/serial/StartWithStopK8s 70.01
257 TestNoKubernetes/serial/Start 49.46
268 TestNoKubernetes/serial/VerifyK8sNotRunning 0.23
273 TestNetworkPlugins/group/false 3.25
274 TestNoKubernetes/serial/ProfileList 1.33
275 TestNoKubernetes/serial/Stop 1.3
276 TestNoKubernetes/serial/StartNoArgs 24.29
281 TestPause/serial/Start 92.95
282 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.19
283 TestStoppedBinaryUpgrade/Setup 0.43
284 TestStoppedBinaryUpgrade/Upgrade 141.01
285 TestPause/serial/SecondStartNoReconfiguration 41.91
286 TestPause/serial/Pause 0.73
287 TestPause/serial/VerifyStatus 0.25
288 TestPause/serial/Unpause 0.7
289 TestPause/serial/PauseAgain 2.07
290 TestPause/serial/DeletePaused 1.37
291 TestPause/serial/VerifyDeletedResources 0.66
294 TestStoppedBinaryUpgrade/MinikubeLogs 0.78
296 TestStartStop/group/no-preload/serial/FirstStart 80.74
297 TestStartStop/group/no-preload/serial/DeployApp 9.28
298 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.95
299 TestStartStop/group/no-preload/serial/Stop 90.98
300 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
301 TestStartStop/group/no-preload/serial/SecondStart 310.33
303 TestStartStop/group/embed-certs/serial/FirstStart 56.94
306 TestStartStop/group/embed-certs/serial/DeployApp 9.31
307 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.96
308 TestStartStop/group/embed-certs/serial/Stop 91.21
310 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 56.69
311 TestStartStop/group/old-k8s-version/serial/Stop 2.29
312 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
314 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
315 TestStartStop/group/embed-certs/serial/SecondStart 336.39
316 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.28
317 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.95
318 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.01
319 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
320 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
321 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 368.51
322 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
323 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
324 TestStartStop/group/no-preload/serial/Pause 2.77
326 TestStartStop/group/newest-cni/serial/FirstStart 52.69
327 TestStartStop/group/newest-cni/serial/DeployApp 0
328 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.05
329 TestStartStop/group/newest-cni/serial/Stop 10.41
330 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
331 TestStartStop/group/newest-cni/serial/SecondStart 36.65
332 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
333 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
334 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.22
335 TestStartStop/group/newest-cni/serial/Pause 2.29
336 TestNetworkPlugins/group/auto/Start 50.37
337 TestNetworkPlugins/group/auto/KubeletFlags 0.2
338 TestNetworkPlugins/group/auto/NetCatPod 10.23
339 TestNetworkPlugins/group/auto/DNS 25.76
340 TestNetworkPlugins/group/auto/Localhost 0.19
341 TestNetworkPlugins/group/auto/HairPin 0.11
342 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 13
343 TestNetworkPlugins/group/kindnet/Start 64.96
344 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
345 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
346 TestStartStop/group/embed-certs/serial/Pause 2.54
347 TestNetworkPlugins/group/calico/Start 83.55
348 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
349 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
350 TestNetworkPlugins/group/kindnet/NetCatPod 13.26
351 TestNetworkPlugins/group/kindnet/DNS 0.15
352 TestNetworkPlugins/group/kindnet/Localhost 0.12
353 TestNetworkPlugins/group/kindnet/HairPin 0.12
354 TestNetworkPlugins/group/custom-flannel/Start 70.82
355 TestNetworkPlugins/group/calico/ControllerPod 6
356 TestNetworkPlugins/group/calico/KubeletFlags 0.22
357 TestNetworkPlugins/group/calico/NetCatPod 10.24
358 TestNetworkPlugins/group/calico/DNS 0.13
359 TestNetworkPlugins/group/calico/Localhost 0.17
360 TestNetworkPlugins/group/calico/HairPin 0.11
361 TestNetworkPlugins/group/enable-default-cni/Start 61.51
362 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 12.01
364 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
365 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
366 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.53
367 TestNetworkPlugins/group/flannel/Start 73.99
368 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.22
369 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.99
370 TestNetworkPlugins/group/custom-flannel/DNS 0.14
371 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
372 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
373 TestNetworkPlugins/group/bridge/Start 54.79
374 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.24
375 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.23
376 TestNetworkPlugins/group/enable-default-cni/DNS 0.14
377 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
378 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
379 TestNetworkPlugins/group/flannel/ControllerPod 6.01
380 TestNetworkPlugins/group/flannel/KubeletFlags 0.21
381 TestNetworkPlugins/group/flannel/NetCatPod 11.21
382 TestNetworkPlugins/group/bridge/KubeletFlags 0.2
383 TestNetworkPlugins/group/bridge/NetCatPod 10.25
384 TestNetworkPlugins/group/flannel/DNS 0.14
385 TestNetworkPlugins/group/flannel/Localhost 0.11
386 TestNetworkPlugins/group/flannel/HairPin 0.12
387 TestNetworkPlugins/group/bridge/DNS 20.79
388 TestNetworkPlugins/group/bridge/Localhost 0.12
389 TestNetworkPlugins/group/bridge/HairPin 0.11
TestDownloadOnly/v1.20.0/json-events (8.36s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-521523 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-521523 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (8.358533587s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (8.36s)
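For reference, this step can be reproduced outside the harness with the same flags shown above; a minimal sketch, assuming a locally built out/minikube-linux-amd64 and an arbitrary throwaway profile name:

	# download the v1.20.0 ISO, preload tarball and binaries without creating a VM
	out/minikube-linux-amd64 start -o=json --download-only -p download-only-local \
	  --force --alsologtostderr --kubernetes-version=v1.20.0 \
	  --container-runtime=crio --driver=kvm2
	# remove the throwaway profile once the cache is populated
	out/minikube-linux-amd64 delete -p download-only-local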

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0211 02:01:56.965509   19645 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0211 02:01:56.965603   19645 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20400-12456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
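A hedged manual equivalent of this check (the cache path below assumes the default MINIKUBE_HOME layout; this job uses /home/jenkins/minikube-integration/20400-12456/.minikube instead):

	# the assertion passes when the v1.20.0 cri-o preload tarball is already cached locally
	ls -lh "${MINIKUBE_HOME:-$HOME/.minikube}"/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4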

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-521523
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-521523: exit status 85 (59.417488ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-521523 | jenkins | v1.35.0 | 11 Feb 25 02:01 UTC |          |
	|         | -p download-only-521523        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/11 02:01:48
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0211 02:01:48.645718   19657 out.go:345] Setting OutFile to fd 1 ...
	I0211 02:01:48.645820   19657 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 02:01:48.645832   19657 out.go:358] Setting ErrFile to fd 2...
	I0211 02:01:48.645838   19657 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 02:01:48.645998   19657 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20400-12456/.minikube/bin
	W0211 02:01:48.646111   19657 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20400-12456/.minikube/config/config.json: open /home/jenkins/minikube-integration/20400-12456/.minikube/config/config.json: no such file or directory
	I0211 02:01:48.646693   19657 out.go:352] Setting JSON to true
	I0211 02:01:48.647688   19657 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":2660,"bootTime":1739236649,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0211 02:01:48.647783   19657 start.go:139] virtualization: kvm guest
	I0211 02:01:48.650356   19657 out.go:97] [download-only-521523] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	W0211 02:01:48.650485   19657 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20400-12456/.minikube/cache/preloaded-tarball: no such file or directory
	I0211 02:01:48.650517   19657 notify.go:220] Checking for updates...
	I0211 02:01:48.651777   19657 out.go:169] MINIKUBE_LOCATION=20400
	I0211 02:01:48.653050   19657 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0211 02:01:48.654278   19657 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20400-12456/kubeconfig
	I0211 02:01:48.655537   19657 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20400-12456/.minikube
	I0211 02:01:48.656688   19657 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0211 02:01:48.658904   19657 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0211 02:01:48.659160   19657 driver.go:394] Setting default libvirt URI to qemu:///system
	I0211 02:01:48.755694   19657 out.go:97] Using the kvm2 driver based on user configuration
	I0211 02:01:48.755719   19657 start.go:297] selected driver: kvm2
	I0211 02:01:48.755726   19657 start.go:901] validating driver "kvm2" against <nil>
	I0211 02:01:48.756048   19657 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0211 02:01:48.756183   19657 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20400-12456/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0211 02:01:48.770583   19657 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0211 02:01:48.770635   19657 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0211 02:01:48.771206   19657 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0211 02:01:48.771380   19657 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0211 02:01:48.771416   19657 cni.go:84] Creating CNI manager for ""
	I0211 02:01:48.771476   19657 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0211 02:01:48.771486   19657 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0211 02:01:48.771560   19657 start.go:340] cluster config:
	{Name:download-only-521523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-521523 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISoc
ket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0211 02:01:48.771753   19657 iso.go:125] acquiring lock: {Name:mkf866c6e52b4efa55cc59a9f329105471716f9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0211 02:01:48.773351   19657 out.go:97] Downloading VM boot image ...
	I0211 02:01:48.773379   19657 download.go:108] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso.sha256 -> /home/jenkins/minikube-integration/20400-12456/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso
	I0211 02:01:51.275406   19657 out.go:97] Starting "download-only-521523" primary control-plane node in "download-only-521523" cluster
	I0211 02:01:51.275431   19657 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0211 02:01:51.296574   19657 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0211 02:01:51.296608   19657 cache.go:56] Caching tarball of preloaded images
	I0211 02:01:51.296774   19657 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0211 02:01:51.298655   19657 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0211 02:01:51.298673   19657 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0211 02:01:51.321456   19657 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/20400-12456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-521523 host does not exist
	  To start a cluster, run: "minikube start -p download-only-521523"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-521523
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnly/v1.32.1/json-events (5.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-869004 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-869004 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (5.202702298s)
--- PASS: TestDownloadOnly/v1.32.1/json-events (5.20s)

                                                
                                    
TestDownloadOnly/v1.32.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/preload-exists
I0211 02:02:02.482944   19645 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
I0211 02:02:02.482991   19645 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20400-12456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-869004
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-869004: exit status 85 (60.460765ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-521523 | jenkins | v1.35.0 | 11 Feb 25 02:01 UTC |                     |
	|         | -p download-only-521523        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 11 Feb 25 02:01 UTC | 11 Feb 25 02:01 UTC |
	| delete  | -p download-only-521523        | download-only-521523 | jenkins | v1.35.0 | 11 Feb 25 02:01 UTC | 11 Feb 25 02:01 UTC |
	| start   | -o=json --download-only        | download-only-869004 | jenkins | v1.35.0 | 11 Feb 25 02:01 UTC |                     |
	|         | -p download-only-869004        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/11 02:01:57
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0211 02:01:57.320878   19848 out.go:345] Setting OutFile to fd 1 ...
	I0211 02:01:57.321002   19848 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 02:01:57.321012   19848 out.go:358] Setting ErrFile to fd 2...
	I0211 02:01:57.321017   19848 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 02:01:57.321232   19848 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20400-12456/.minikube/bin
	I0211 02:01:57.321813   19848 out.go:352] Setting JSON to true
	I0211 02:01:57.322635   19848 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":2668,"bootTime":1739236649,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0211 02:01:57.322728   19848 start.go:139] virtualization: kvm guest
	I0211 02:01:57.324791   19848 out.go:97] [download-only-869004] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0211 02:01:57.324947   19848 notify.go:220] Checking for updates...
	I0211 02:01:57.326324   19848 out.go:169] MINIKUBE_LOCATION=20400
	I0211 02:01:57.327567   19848 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0211 02:01:57.328713   19848 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20400-12456/kubeconfig
	I0211 02:01:57.329784   19848 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20400-12456/.minikube
	I0211 02:01:57.330957   19848 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0211 02:01:57.333905   19848 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0211 02:01:57.334144   19848 driver.go:394] Setting default libvirt URI to qemu:///system
	I0211 02:01:57.365383   19848 out.go:97] Using the kvm2 driver based on user configuration
	I0211 02:01:57.365426   19848 start.go:297] selected driver: kvm2
	I0211 02:01:57.365434   19848 start.go:901] validating driver "kvm2" against <nil>
	I0211 02:01:57.365768   19848 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0211 02:01:57.365853   19848 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20400-12456/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0211 02:01:57.380825   19848 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.35.0
	I0211 02:01:57.380876   19848 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0211 02:01:57.381349   19848 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0211 02:01:57.381493   19848 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0211 02:01:57.381516   19848 cni.go:84] Creating CNI manager for ""
	I0211 02:01:57.381560   19848 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0211 02:01:57.381569   19848 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0211 02:01:57.381618   19848 start.go:340] cluster config:
	{Name:download-only-869004 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:download-only-869004 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISoc
ket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0211 02:01:57.381701   19848 iso.go:125] acquiring lock: {Name:mkf866c6e52b4efa55cc59a9f329105471716f9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0211 02:01:57.383351   19848 out.go:97] Starting "download-only-869004" primary control-plane node in "download-only-869004" cluster
	I0211 02:01:57.383368   19848 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0211 02:01:57.406494   19848 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.1/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0211 02:01:57.406512   19848 cache.go:56] Caching tarball of preloaded images
	I0211 02:01:57.406631   19848 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0211 02:01:57.408285   19848 out.go:97] Downloading Kubernetes v1.32.1 preload ...
	I0211 02:01:57.408299   19848 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 ...
	I0211 02:01:57.430492   19848 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.1/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2af56a340efcc3949401b47b9a5d537 -> /home/jenkins/minikube-integration/20400-12456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4
	I0211 02:02:00.789851   19848 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 ...
	I0211 02:02:00.789938   19848 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20400-12456/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-cri-o-overlay-amd64.tar.lz4 ...
	I0211 02:02:01.538908   19848 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on crio
	I0211 02:02:01.539237   19848 profile.go:143] Saving config to /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/download-only-869004/config.json ...
	I0211 02:02:01.539266   19848 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/download-only-869004/config.json: {Name:mk50b49d4689253c61e687db6ea798b23f5fc0e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0211 02:02:01.539407   19848 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime crio
	I0211 02:02:01.539565   19848 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/20400-12456/.minikube/cache/linux/amd64/v1.32.1/kubectl
	
	
	* The control-plane node download-only-869004 host does not exist
	  To start a cluster, run: "minikube start -p download-only-869004"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.1/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.32.1/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.32.1/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-869004
--- PASS: TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestBinaryMirror (0.59s)

                                                
                                                
=== RUN   TestBinaryMirror
I0211 02:02:03.056502   19645 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-945729 --alsologtostderr --binary-mirror http://127.0.0.1:40719 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-945729" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-945729
--- PASS: TestBinaryMirror (0.59s)
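The mirror address above is a local HTTP server started by the test itself; an illustrative stand-in for trying --binary-mirror by hand (the port, directory and file layout here are assumptions, not what the harness runs):

	# serve a directory of pre-downloaded release binaries over HTTP
	python3 -m http.server 40719 --directory ./mirror &
	# point minikube's kubectl/kubelet/kubeadm downloads at that mirror
	out/minikube-linux-amd64 start --download-only -p binary-mirror-local \
	  --binary-mirror http://127.0.0.1:40719 --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 delete -p binary-mirror-local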

                                                
                                    
TestOffline (79.59s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-330489 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-330489 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m18.777585904s)
helpers_test.go:175: Cleaning up "offline-crio-330489" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-330489
--- PASS: TestOffline (79.59s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-046133
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-046133: exit status 85 (50.387015ms)

                                                
                                                
-- stdout --
	* Profile "addons-046133" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-046133"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-046133
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-046133: exit status 85 (50.713894ms)

                                                
                                                
-- stdout --
	* Profile "addons-046133" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-046133"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (129.34s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-046133 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-046133 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m9.34300901s)
--- PASS: TestAddons/Setup (129.34s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (2.68s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-046133 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-046133 get secret gcp-auth -n new-namespace
addons_test.go:583: (dbg) Non-zero exit: kubectl --context addons-046133 get secret gcp-auth -n new-namespace: exit status 1 (163.799052ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): secrets "gcp-auth" not found

                                                
                                                
** /stderr **
addons_test.go:575: (dbg) Run:  kubectl --context addons-046133 logs -l app=gcp-auth -n gcp-auth
I0211 02:04:13.616383   19645 retry.go:31] will retry after 2.339774466s: %!w(<nil>): gcp-auth container logs: 
-- stdout --
	2025/02/11 02:04:12 GCP Auth Webhook started!
	2025/02/11 02:04:13 Ready to marshal response ...
	2025/02/11 02:04:13 Ready to write response ...

                                                
                                                
-- /stdout --
addons_test.go:583: (dbg) Run:  kubectl --context addons-046133 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (2.68s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.49s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-046133 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-046133 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9e67be71-2587-40c9-87e6-ff6a660a4097] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9e67be71-2587-40c9-87e6-ff6a660a4097] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.004091637s
addons_test.go:633: (dbg) Run:  kubectl --context addons-046133 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-046133 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-046133 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.49s)

                                                
                                    
TestAddons/parallel/Registry (16.46s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 4.442109ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-7ggp5" [19abba60-f7d5-44ce-9bd4-39e4c503abf4] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.002593279s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-zwrhv" [7949f39c-a8cd-4280-b842-e053bd5eaf1f] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003465459s
addons_test.go:331: (dbg) Run:  kubectl --context addons-046133 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-046133 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-046133 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.415446922s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-046133 ip
2025/02/11 02:04:49 [DEBUG] GET http://192.168.39.211:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-046133 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.46s)
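The reachability check above runs from an in-cluster busybox pod; a hedged host-side equivalent against the node IP reported by "minikube ip" (the /v2/_catalog path is the standard registry v2 listing endpoint, not something this test itself queries):

	curl -s http://192.168.39.211:5000/v2/_catalog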

                                                
                                    
TestAddons/parallel/InspektorGadget (11.17s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-m8t9l" [8e161157-c591-4f60-9b8e-6fd66eb41c62] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003958511s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-046133 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-046133 addons disable inspektor-gadget --alsologtostderr -v=1: (6.163486409s)
--- PASS: TestAddons/parallel/InspektorGadget (11.17s)

                                                
                                    
TestAddons/parallel/MetricsServer (7.04s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 7.736563ms
I0211 02:04:33.865940   19645 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0211 02:04:33.865962   19645 kapi.go:107] duration metric: took 8.076361ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-c4gg7" [3bfecb29-117e-4bcd-9ef8-a1dd75da6f28] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.00218097s
addons_test.go:402: (dbg) Run:  kubectl --context addons-046133 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-046133 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (7.04s)

                                                
                                    
TestAddons/parallel/CSI (66.8s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:488: csi-hostpath-driver pods stabilized in 8.084857ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-046133 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-046133 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-046133 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-046133 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-046133 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-046133 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-046133 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-046133 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-046133 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-046133 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-046133 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-046133 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-046133 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-046133 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-046133 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-046133 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-046133 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-046133 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-046133 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-046133 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-046133 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-046133 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [6b033033-fbb0-456e-9b00-81a4aec5bf8d] Pending
helpers_test.go:344: "task-pv-pod" [6b033033-fbb0-456e-9b00-81a4aec5bf8d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [6b033033-fbb0-456e-9b00-81a4aec5bf8d] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.003530637s
addons_test.go:511: (dbg) Run:  kubectl --context addons-046133 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-046133 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-046133 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-046133 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-046133 delete pod task-pv-pod: (1.303811568s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-046133 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-046133 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-046133 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-046133 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-046133 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-046133 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-046133 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-046133 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-046133 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-046133 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-046133 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-046133 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-046133 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-046133 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-046133 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-046133 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-046133 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-046133 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-046133 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [b4b4d3d8-627f-4543-8aeb-1e54293c491c] Pending
helpers_test.go:344: "task-pv-pod-restore" [b4b4d3d8-627f-4543-8aeb-1e54293c491c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [b4b4d3d8-627f-4543-8aeb-1e54293c491c] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003486745s
addons_test.go:553: (dbg) Run:  kubectl --context addons-046133 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-046133 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-046133 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-046133 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-046133 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-046133 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.712607787s)
--- PASS: TestAddons/parallel/CSI (66.80s)
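Note: the snapshot-and-restore flow exercised above can be reproduced by hand against the csi-hostpath-driver addon. The manifests below are a minimal sketch, not the contents of the repository's testdata/csi-hostpath-driver files; the class names csi-hostpath-snapclass and csi-hostpath-sc and the 1Gi size are assumptions.

# Sketch only: snapshot an existing PVC named hpvc, then restore it into hpvc-restore.
kubectl apply -f - <<'EOF'
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-demo
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass   # assumed class name
  source:
    persistentVolumeClaimName: hpvc
EOF
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc-restore
spec:
  storageClassName: csi-hostpath-sc                  # assumed class name
  dataSource:
    name: new-snapshot-demo
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
EOF
# Poll readiness the same way helpers_test.go does above:
kubectl get volumesnapshot new-snapshot-demo -n default -o jsonpath='{.status.readyToUse}'
kubectl get pvc hpvc-restore -n default -o jsonpath='{.status.phase}'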

                                                
                                    
TestAddons/parallel/Headlamp (19.68s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-046133 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5d4b5d7bd6-hc2tk" [f54ffa5f-0a3c-45eb-a8b3-ec4fff79441e] Pending
helpers_test.go:344: "headlamp-5d4b5d7bd6-hc2tk" [f54ffa5f-0a3c-45eb-a8b3-ec4fff79441e] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-hc2tk" [f54ffa5f-0a3c-45eb-a8b3-ec4fff79441e] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.004699414s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-046133 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-046133 addons disable headlamp --alsologtostderr -v=1: (5.705620578s)
--- PASS: TestAddons/parallel/Headlamp (19.68s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.61s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5d76cffbc-kxc9k" [d4b7ab8d-a65e-4e5d-a3ba-39414a4b2648] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.008597449s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-046133 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.61s)

                                                
                                    
TestAddons/parallel/LocalPath (59.22s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-046133 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-046133 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-046133 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-046133 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-046133 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-046133 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-046133 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-046133 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-046133 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-046133 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-046133 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [ed9dc4fe-2a73-402e-b8cf-9f59fb6b2cde] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [ed9dc4fe-2a73-402e-b8cf-9f59fb6b2cde] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [ed9dc4fe-2a73-402e-b8cf-9f59fb6b2cde] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 7.003517481s
addons_test.go:906: (dbg) Run:  kubectl --context addons-046133 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-046133 ssh "cat /opt/local-path-provisioner/pvc-cc30bfbf-dfc2-43dd-a5a7-18400646de0d_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-046133 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-046133 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-046133 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-046133 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.38024084s)
--- PASS: TestAddons/parallel/LocalPath (59.22s)
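Note: the PVC/pod pair exercised above maps to the local-path-provisioner layout checked by the ssh step (file1 under /opt/local-path-provisioner on the node). The manifest below is a minimal sketch, not the actual testdata/storage-provisioner-rancher files; the storage class name local-path and the busybox command are assumptions.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  storageClassName: local-path            # assumed: the addon's default class
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 64Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: test-local-path
  labels:
    run: test-local-path                  # matches the selector the test waits on
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "echo local-path-test > /data/file1"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: test-pvc
EOF
# The written file then appears on the node, as the ssh step above verifies:
minikube -p addons-046133 ssh "cat /opt/local-path-provisioner/pvc-*_default_test-pvc/file1"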

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.88s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-j9p8p" [0fcd8b34-feb0-44f0-830d-b4d79aa89065] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003745432s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-046133 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.88s)

                                                
                                    
TestAddons/parallel/Yakd (11.95s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
I0211 02:04:33.857898   19645 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-n6mxc" [a8ad2221-1da7-45a3-a99c-0dcfe91aa9d2] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.00302347s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-046133 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-046133 addons disable yakd --alsologtostderr -v=1: (5.94297852s)
--- PASS: TestAddons/parallel/Yakd (11.95s)

                                                
                                    
TestAddons/StoppedEnableDisable (91.22s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-046133
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-046133: (1m30.946798189s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-046133
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-046133
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-046133
--- PASS: TestAddons/StoppedEnableDisable (91.22s)

                                                
                                    
TestCertOptions (61.22s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-953939 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
E0211 03:04:16.210478   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-953939 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (59.766854448s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-953939 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-953939 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-953939 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-953939" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-953939
--- PASS: TestCertOptions (61.22s)
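Note: the assertions behind this test can be repeated by hand; the commands below are a sketch built from the flags and paths shown above (the profile name, port 8555 and the SAN values come from the start invocation, not from re-running the test, and out/minikube-linux-amd64 is shortened to minikube).

minikube -p cert-options-953939 ssh \
  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
  | grep -A1 "Subject Alternative Name"
# Expect 127.0.0.1, 192.168.15.15, localhost and www.google.com among the SANs.
kubectl --context cert-options-953939 config view | grep 8555
# The kubeconfig server entry should point at the non-default apiserver port 8555.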

                                                
                                    
TestCertExpiration (640.02s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-411526 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-411526 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m31.321762396s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-411526 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-411526 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (6m7.694084155s)
helpers_test.go:175: Cleaning up "cert-expiration-411526" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-411526
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-411526: (1.003809751s)
--- PASS: TestCertExpiration (640.02s)
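Note: the test starts the profile with three-minute certificates, waits for them to lapse, then restarts with --cert-expiration=8760h, which regenerates them. A sketch for checking the rotated expiry by hand; the cert path is minikube's standard location, as used by TestCertOptions above.

minikube -p cert-expiration-411526 ssh \
  "sudo openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"
# After the second start the notAfter date should be roughly one year (8760h) out.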

                                                
                                    
TestForceSystemdFlag (100.44s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-660198 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E0211 03:02:23.754928   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/functional-454298/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-660198 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m39.444575989s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-660198 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-660198" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-660198
--- PASS: TestForceSystemdFlag (100.44s)

                                                
                                    
TestForceSystemdEnv (44.96s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-565593 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-565593 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (44.17749883s)
helpers_test.go:175: Cleaning up "force-systemd-env-565593" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-565593
--- PASS: TestForceSystemdEnv (44.96s)

                                                
                                    
TestKVMDriverInstallOrUpdate (4.16s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0211 03:05:00.843993   19645 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0211 03:05:00.844150   19645 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0211 03:05:00.877238   19645 install.go:62] docker-machine-driver-kvm2: exit status 1
W0211 03:05:00.877715   19645 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0211 03:05:00.877810   19645 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1841795490/001/docker-machine-driver-kvm2
I0211 03:05:01.118983   19645 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1841795490/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5494820 0x5494820 0x5494820 0x5494820 0x5494820 0x5494820 0x5494820] Decompressors:map[bz2:0xc0005aca88 gz:0xc0005acb20 tar:0xc0005acac0 tar.bz2:0xc0005acad0 tar.gz:0xc0005acae0 tar.xz:0xc0005acaf0 tar.zst:0xc0005acb10 tbz2:0xc0005acad0 tgz:0xc0005acae0 txz:0xc0005acaf0 tzst:0xc0005acb10 xz:0xc0005acb28 zip:0xc0005acb40 zst:0xc0005acb50] Getters:map[file:0xc001caa870 http:0xc00099a280 https:0xc00099a2d0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0211 03:05:01.119046   19645 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1841795490/001/docker-machine-driver-kvm2
I0211 03:05:03.183052   19645 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0211 03:05:03.183131   19645 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0211 03:05:03.209309   19645 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0211 03:05:03.209338   19645 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0211 03:05:03.209399   19645 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0211 03:05:03.209422   19645 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1841795490/002/docker-machine-driver-kvm2
I0211 03:05:03.256659   19645 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1841795490/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5494820 0x5494820 0x5494820 0x5494820 0x5494820 0x5494820 0x5494820] Decompressors:map[bz2:0xc0005aca88 gz:0xc0005acb20 tar:0xc0005acac0 tar.bz2:0xc0005acad0 tar.gz:0xc0005acae0 tar.xz:0xc0005acaf0 tar.zst:0xc0005acb10 tbz2:0xc0005acad0 tgz:0xc0005acae0 txz:0xc0005acaf0 tzst:0xc0005acb10 xz:0xc0005acb28 zip:0xc0005acb40 zst:0xc0005acb50] Getters:map[file:0xc0008075b0 http:0xc000906c80 https:0xc000906cd0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0211 03:05:03.256711   19645 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1841795490/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (4.16s)
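Note: the two "bad response code: 404" entries above come from the arch-suffixed release asset, after which the downloader falls back to the unsuffixed asset. A rough shell equivalent of that fallback, using the URLs from the log (the chmod step is an assumption about what a manual install would need):

curl -fLO https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64 \
  || curl -fLO https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2
chmod +x docker-machine-driver-kvm2*   # the test then validates the binary's version against 1.3.0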

                                                
                                    
TestErrorSpam/setup (40.54s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-437484 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-437484 --driver=kvm2  --container-runtime=crio
E0211 02:09:16.210465   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:09:16.222216   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:09:16.234003   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:09:16.255320   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:09:16.296658   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:09:16.378077   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:09:16.539587   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:09:16.861232   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:09:17.503218   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:09:18.784607   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:09:21.346015   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:09:26.467382   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-437484 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-437484 --driver=kvm2  --container-runtime=crio: (40.541992642s)
--- PASS: TestErrorSpam/setup (40.54s)

                                                
                                    
TestErrorSpam/start (0.34s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-437484 --log_dir /tmp/nospam-437484 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-437484 --log_dir /tmp/nospam-437484 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-437484 --log_dir /tmp/nospam-437484 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

                                                
                                    
TestErrorSpam/status (0.72s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-437484 --log_dir /tmp/nospam-437484 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-437484 --log_dir /tmp/nospam-437484 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-437484 --log_dir /tmp/nospam-437484 status
--- PASS: TestErrorSpam/status (0.72s)

                                                
                                    
TestErrorSpam/pause (1.51s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-437484 --log_dir /tmp/nospam-437484 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-437484 --log_dir /tmp/nospam-437484 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-437484 --log_dir /tmp/nospam-437484 pause
--- PASS: TestErrorSpam/pause (1.51s)

                                                
                                    
TestErrorSpam/unpause (1.69s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-437484 --log_dir /tmp/nospam-437484 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-437484 --log_dir /tmp/nospam-437484 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-437484 --log_dir /tmp/nospam-437484 unpause
--- PASS: TestErrorSpam/unpause (1.69s)

                                                
                                    
TestErrorSpam/stop (4.27s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-437484 --log_dir /tmp/nospam-437484 stop
E0211 02:09:36.709073   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-437484 --log_dir /tmp/nospam-437484 stop: (1.611874606s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-437484 --log_dir /tmp/nospam-437484 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-437484 --log_dir /tmp/nospam-437484 stop: (1.356972806s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-437484 --log_dir /tmp/nospam-437484 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-437484 --log_dir /tmp/nospam-437484 stop: (1.302356816s)
--- PASS: TestErrorSpam/stop (4.27s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1872: local sync path: /home/jenkins/minikube-integration/20400-12456/.minikube/files/etc/test/nested/copy/19645/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (82.15s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2251: (dbg) Run:  out/minikube-linux-amd64 start -p functional-454298 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0211 02:09:57.191251   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:10:38.154456   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2251: (dbg) Done: out/minikube-linux-amd64 start -p functional-454298 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m22.153674238s)
--- PASS: TestFunctional/serial/StartWithProxy (82.15s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (29.68s)

=== RUN   TestFunctional/serial/SoftStart
I0211 02:11:03.487070   19645 config.go:182] Loaded profile config "functional-454298": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
functional_test.go:676: (dbg) Run:  out/minikube-linux-amd64 start -p functional-454298 --alsologtostderr -v=8
functional_test.go:676: (dbg) Done: out/minikube-linux-amd64 start -p functional-454298 --alsologtostderr -v=8: (29.67969211s)
functional_test.go:680: soft start took 29.680450059s for "functional-454298" cluster.
I0211 02:11:33.167152   19645 config.go:182] Loaded profile config "functional-454298": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/SoftStart (29.68s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:698: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:713: (dbg) Run:  kubectl --context functional-454298 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 cache add registry.k8s.io/pause:3.1
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-454298 cache add registry.k8s.io/pause:3.1: (1.014622256s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 cache add registry.k8s.io/pause:3.3
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-454298 cache add registry.k8s.io/pause:3.3: (1.054598858s)
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 cache add registry.k8s.io/pause:latest
functional_test.go:1066: (dbg) Done: out/minikube-linux-amd64 -p functional-454298 cache add registry.k8s.io/pause:latest: (1.048639572s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.12s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.86s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1094: (dbg) Run:  docker build -t minikube-local-cache-test:functional-454298 /tmp/TestFunctionalserialCacheCmdcacheadd_local2450771152/001
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 cache add minikube-local-cache-test:functional-454298
functional_test.go:1106: (dbg) Done: out/minikube-linux-amd64 -p functional-454298 cache add minikube-local-cache-test:functional-454298: (1.548768261s)
functional_test.go:1111: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 cache delete minikube-local-cache-test:functional-454298
functional_test.go:1100: (dbg) Run:  docker rmi minikube-local-cache-test:functional-454298
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.86s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1119: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1127: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1141: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.61s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1164: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-454298 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (210.763147ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1175: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 cache reload
functional_test.go:1180: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.61s)
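Note: the reload sequence above can be replayed by hand; every command below appears in the log (with out/minikube-linux-amd64 shortened to minikube):

minikube -p functional-454298 cache add registry.k8s.io/pause:latest
minikube -p functional-454298 ssh sudo crictl rmi registry.k8s.io/pause:latest
minikube -p functional-454298 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits 1: image is gone
minikube -p functional-454298 cache reload                                            # pushes cached images back into the node
minikube -p functional-454298 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again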

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:733: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 kubectl -- --context functional-454298 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:758: (dbg) Run:  out/kubectl --context functional-454298 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (36.34s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:774: (dbg) Run:  out/minikube-linux-amd64 start -p functional-454298 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0211 02:12:00.079560   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:774: (dbg) Done: out/minikube-linux-amd64 start -p functional-454298 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.3399027s)
functional_test.go:778: restart took 36.340020021s for "functional-454298" cluster.
I0211 02:12:16.861642   19645 config.go:182] Loaded profile config "functional-454298": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/ExtraConfig (36.34s)
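Note: --extra-config=apiserver.enable-admission-plugins=... is passed through to kube-apiserver on restart. A sketch for confirming the flag landed; the component=kube-apiserver label is the standard kubeadm label, and this check is not part of the test itself.

kubectl --context functional-454298 -n kube-system get pods -l component=kube-apiserver \
  -o jsonpath='{.items[0].spec.containers[0].command}' \
  | tr ',' '\n' | grep enable-admission-plugins
# Expect the value NamespaceAutoProvision set by the --extra-config flag above.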

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:827: (dbg) Run:  kubectl --context functional-454298 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:842: etcd phase: Running
functional_test.go:852: etcd status: Ready
functional_test.go:842: kube-apiserver phase: Running
functional_test.go:852: kube-apiserver status: Ready
functional_test.go:842: kube-controller-manager phase: Running
functional_test.go:852: kube-controller-manager status: Ready
functional_test.go:842: kube-scheduler phase: Running
functional_test.go:852: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.24s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1253: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 logs
functional_test.go:1253: (dbg) Done: out/minikube-linux-amd64 -p functional-454298 logs: (1.244569135s)
--- PASS: TestFunctional/serial/LogsCmd (1.24s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.37s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1267: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 logs --file /tmp/TestFunctionalserialLogsFileCmd3313661514/001/logs.txt
functional_test.go:1267: (dbg) Done: out/minikube-linux-amd64 -p functional-454298 logs --file /tmp/TestFunctionalserialLogsFileCmd3313661514/001/logs.txt: (1.371736552s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.37s)

                                                
                                    
TestFunctional/serial/InvalidService (4.01s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2338: (dbg) Run:  kubectl --context functional-454298 apply -f testdata/invalidsvc.yaml
functional_test.go:2352: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-454298
functional_test.go:2352: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-454298: exit status 115 (253.332016ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.50.88:30318 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2344: (dbg) Run:  kubectl --context functional-454298 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.01s)
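Note: SVC_UNREACHABLE above is the expected outcome; the fixture points a NodePort service at a selector no pod satisfies, so minikube service finds no running backend. The manifest below is a minimal sketch with an assumed selector, not the actual testdata/invalidsvc.yaml:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: invalid-svc
spec:
  type: NodePort
  selector:
    app: does-not-exist        # assumed: nothing matches, so the service has no endpoints
  ports:
  - port: 80
EOF
minikube -p functional-454298 service invalid-svc   # exits 115 with SVC_UNREACHABLE, as above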

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.35s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-454298 config get cpus: exit status 14 (63.016404ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 config set cpus 2
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 config get cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-454298 config get cpus: exit status 14 (58.174442ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.35s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (13.91s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:922: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-454298 --alsologtostderr -v=1]
functional_test.go:927: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-454298 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 26869: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.91s)

                                                
                                    
TestFunctional/parallel/DryRun (0.29s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-454298 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:991: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-454298 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (134.021353ms)

                                                
                                                
-- stdout --
	* [functional-454298] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20400
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20400-12456/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20400-12456/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0211 02:12:24.092633   26334 out.go:345] Setting OutFile to fd 1 ...
	I0211 02:12:24.092808   26334 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 02:12:24.092818   26334 out.go:358] Setting ErrFile to fd 2...
	I0211 02:12:24.092822   26334 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 02:12:24.093101   26334 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20400-12456/.minikube/bin
	I0211 02:12:24.093695   26334 out.go:352] Setting JSON to false
	I0211 02:12:24.094557   26334 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3295,"bootTime":1739236649,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0211 02:12:24.094649   26334 start.go:139] virtualization: kvm guest
	I0211 02:12:24.096790   26334 out.go:177] * [functional-454298] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0211 02:12:24.098167   26334 notify.go:220] Checking for updates...
	I0211 02:12:24.098189   26334 out.go:177]   - MINIKUBE_LOCATION=20400
	I0211 02:12:24.099643   26334 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0211 02:12:24.100813   26334 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20400-12456/kubeconfig
	I0211 02:12:24.101914   26334 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20400-12456/.minikube
	I0211 02:12:24.103094   26334 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0211 02:12:24.104120   26334 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0211 02:12:24.105841   26334 config.go:182] Loaded profile config "functional-454298": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0211 02:12:24.106681   26334 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:12:24.106755   26334 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:12:24.121704   26334 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40241
	I0211 02:12:24.122042   26334 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:12:24.122566   26334 main.go:141] libmachine: Using API Version  1
	I0211 02:12:24.122597   26334 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:12:24.122964   26334 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:12:24.123180   26334 main.go:141] libmachine: (functional-454298) Calling .DriverName
	I0211 02:12:24.123435   26334 driver.go:394] Setting default libvirt URI to qemu:///system
	I0211 02:12:24.123717   26334 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:12:24.123749   26334 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:12:24.137720   26334 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40209
	I0211 02:12:24.138118   26334 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:12:24.138578   26334 main.go:141] libmachine: Using API Version  1
	I0211 02:12:24.138599   26334 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:12:24.138901   26334 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:12:24.139061   26334 main.go:141] libmachine: (functional-454298) Calling .DriverName
	I0211 02:12:24.170981   26334 out.go:177] * Using the kvm2 driver based on existing profile
	I0211 02:12:24.172146   26334 start.go:297] selected driver: kvm2
	I0211 02:12:24.172162   26334 start.go:901] validating driver "kvm2" against &{Name:functional-454298 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-454298 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.88 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0211 02:12:24.172296   26334 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0211 02:12:24.174330   26334 out.go:201] 
	W0211 02:12:24.175590   26334 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0211 02:12:24.176813   26334 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:1008: (dbg) Run:  out/minikube-linux-amd64 start -p functional-454298 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.29s)
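
The dry-run exit above comes from a pre-flight memory validation: the requested 250MiB is below the 1800MB usable minimum, so the command aborts with RSRC_INSUFFICIENT_REQ_MEMORY and exit status 23. A minimal Go sketch of that kind of check follows; the threshold and message wording are taken from the log, while the function and constant names are invented for illustration and are not minikube's actual API.

	// Hypothetical pre-flight check mirroring RSRC_INSUFFICIENT_REQ_MEMORY.
	package main

	import (
		"fmt"
		"os"
	)

	const usableMinimumMB = 1800 // usable minimum reported in the log above

	func validateRequestedMemory(requestedMiB int) error {
		if requestedMiB < usableMinimumMB {
			return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
				requestedMiB, usableMinimumMB)
		}
		return nil
	}

	func main() {
		if err := validateRequestedMemory(250); err != nil {
			fmt.Fprintln(os.Stderr, "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY:", err)
			os.Exit(23) // the dry-run invocation above observes exit status 23
		}
	}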

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 start -p functional-454298 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-454298 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (148.572ms)

                                                
                                                
-- stdout --
	* [functional-454298] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20400
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20400-12456/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20400-12456/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0211 02:12:23.952410   26276 out.go:345] Setting OutFile to fd 1 ...
	I0211 02:12:23.952499   26276 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 02:12:23.952507   26276 out.go:358] Setting ErrFile to fd 2...
	I0211 02:12:23.952511   26276 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 02:12:23.952786   26276 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20400-12456/.minikube/bin
	I0211 02:12:23.953230   26276 out.go:352] Setting JSON to false
	I0211 02:12:23.954057   26276 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3295,"bootTime":1739236649,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0211 02:12:23.954118   26276 start.go:139] virtualization: kvm guest
	I0211 02:12:23.956158   26276 out.go:177] * [functional-454298] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	I0211 02:12:23.957516   26276 out.go:177]   - MINIKUBE_LOCATION=20400
	I0211 02:12:23.957517   26276 notify.go:220] Checking for updates...
	I0211 02:12:23.959768   26276 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0211 02:12:23.961039   26276 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20400-12456/kubeconfig
	I0211 02:12:23.962241   26276 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20400-12456/.minikube
	I0211 02:12:23.963439   26276 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0211 02:12:23.964748   26276 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0211 02:12:23.966577   26276 config.go:182] Loaded profile config "functional-454298": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0211 02:12:23.967173   26276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:12:23.967233   26276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:12:23.982844   26276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34357
	I0211 02:12:23.983249   26276 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:12:23.983872   26276 main.go:141] libmachine: Using API Version  1
	I0211 02:12:23.983897   26276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:12:23.984224   26276 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:12:23.984446   26276 main.go:141] libmachine: (functional-454298) Calling .DriverName
	I0211 02:12:23.984665   26276 driver.go:394] Setting default libvirt URI to qemu:///system
	I0211 02:12:23.984988   26276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:12:23.985032   26276 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:12:24.001278   26276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46257
	I0211 02:12:24.001740   26276 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:12:24.002362   26276 main.go:141] libmachine: Using API Version  1
	I0211 02:12:24.002395   26276 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:12:24.002694   26276 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:12:24.002867   26276 main.go:141] libmachine: (functional-454298) Calling .DriverName
	I0211 02:12:24.037131   26276 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0211 02:12:24.038459   26276 start.go:297] selected driver: kvm2
	I0211 02:12:24.038477   26276 start.go:901] validating driver "kvm2" against &{Name:functional-454298 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:functional-454298 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.88 Port:8441 KubernetesVersion:v1.32.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Moun
t9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0211 02:12:24.038585   26276 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0211 02:12:24.040430   26276 out.go:201] 
	W0211 02:12:24.041626   26276 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0211 02:12:24.042703   26276 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)
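
The French output above is the point of this test: under a French locale, minikube localizes its user-facing messages. As a rough illustration only (this is not minikube's actual translation machinery), a locale-keyed lookup could look like the sketch below; the translated string is copied from the log, everything else is assumed.

	// Generic locale-based message selection, for illustration only.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	var translations = map[string]map[string]string{
		"fr": {
			"Using the kvm2 driver based on existing profile": "Utilisation du pilote kvm2 basé sur le profil existant",
		},
	}

	func localize(msg string) string {
		locale := os.Getenv("LC_ALL")
		if locale == "" {
			locale = os.Getenv("LANG")
		}
		lang := strings.SplitN(locale, "_", 2)[0]
		if m, ok := translations[lang]; ok {
			if t, ok := m[msg]; ok {
				return t
			}
		}
		return msg // fall back to the untranslated message
	}

	func main() {
		fmt.Println("* " + localize("Using the kvm2 driver based on existing profile"))
	}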

                                                
                                    
TestFunctional/parallel/StatusCmd (0.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:871: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 status
functional_test.go:877: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:889: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.92s)
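
The second status invocation above passes a Go template via -f; its fields (including the "kublet" spelling, which is verbatim from the test) are rendered against a status value. A self-contained sketch using the standard text/template package, with the Status struct assumed from the template keys rather than taken from minikube's code:

	package main

	import (
		"os"
		"text/template"
	)

	type Status struct {
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
	}

	func main() {
		// Format string copied from the test command above.
		const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
		tmpl := template.Must(template.New("status").Parse(format))
		st := Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}
		if err := tmpl.Execute(os.Stdout, st); err != nil {
			panic(err)
		}
	}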

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (7.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1646: (dbg) Run:  kubectl --context functional-454298 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1652: (dbg) Run:  kubectl --context functional-454298 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-dd8nf" [17a16317-125c-47f3-87ab-df594eb489eb] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-dd8nf" [17a16317-125c-47f3-87ab-df594eb489eb] Running
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.004397991s
functional_test.go:1666: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 service hello-node-connect --url
functional_test.go:1672: found endpoint for hello-node-connect: http://192.168.50.88:31803
functional_test.go:1692: http://192.168.50.88:31803: success! body:

Hostname: hello-node-connect-58f9cf68d8-dd8nf

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.50.88:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.50.88:31803
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.50s)
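
After `service hello-node-connect --url` resolves the NodePort URL, the test fetches it and prints the echoserver body shown above. A sketch of that fetch-until-success step using only the standard library; the URL and timeout below are illustrative values taken from or assumed around the log.

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// fetchWithRetry keeps GETting url until it returns 200 or timeout elapses.
	func fetchWithRetry(url string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		for {
			resp, err := http.Get(url)
			if err == nil {
				body, readErr := io.ReadAll(resp.Body)
				resp.Body.Close()
				if readErr == nil && resp.StatusCode == http.StatusOK {
					return string(body), nil
				}
				if readErr != nil {
					err = readErr
				} else {
					err = fmt.Errorf("unexpected status %s", resp.Status)
				}
			}
			if time.Now().After(deadline) {
				return "", fmt.Errorf("no successful response from %s within %s: %v", url, timeout, err)
			}
			time.Sleep(2 * time.Second)
		}
	}

	func main() {
		body, err := fetchWithRetry("http://192.168.50.88:31803", 30*time.Second)
		if err != nil {
			panic(err)
		}
		fmt.Println("success! body:")
		fmt.Println(body)
	}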

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 addons list
functional_test.go:1719: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (37.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [b8ce4b50-cf29-4e27-be4f-2e6403d856b3] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003538257s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-454298 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-454298 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-454298 get pvc myclaim -o=json
I0211 02:12:34.357252   19645 retry.go:31] will retry after 1.777843257s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:8a2d3135-110c-4074-8e75-181737dfa942 ResourceVersion:776 Generation:0 CreationTimestamp:2025-02-11 02:12:33 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001778470 VolumeMode:0xc001778480 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-454298 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-454298 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2a716420-11e6-4853-989a-3a519b1a8df4] Pending
helpers_test.go:344: "sp-pod" [2a716420-11e6-4853-989a-3a519b1a8df4] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [2a716420-11e6-4853-989a-3a519b1a8df4] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.003642039s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-454298 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-454298 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-454298 delete -f testdata/storage-provisioner/pod.yaml: (2.175853327s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-454298 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6ced6a74-6201-449c-b177-226bc2f98773] Pending
helpers_test.go:344: "sp-pod" [6ced6a74-6201-449c-b177-226bc2f98773] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [6ced6a74-6201-449c-b177-226bc2f98773] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.003047823s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-454298 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (37.39s)
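
The retry line above shows the test polling `kubectl get pvc myclaim -o json` until status.phase reaches "Bound". A sketch of that wait loop; the context and claim names come from the log, while the retry interval and attempt count are assumptions.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
		"time"
	)

	type pvcStatus struct {
		Status struct {
			Phase string `json:"phase"`
		} `json:"status"`
	}

	// waitForBound polls the claim until its phase is Bound or attempts run out.
	func waitForBound(context, claim string, attempts int) error {
		for i := 0; i < attempts; i++ {
			out, err := exec.Command("kubectl", "--context", context, "get", "pvc", claim, "-o=json").Output()
			if err == nil {
				var pvc pvcStatus
				if jsonErr := json.Unmarshal(out, &pvc); jsonErr == nil && pvc.Status.Phase == "Bound" {
					return nil
				}
				fmt.Printf("will retry: testpvc phase = %q, want \"Bound\"\n", pvc.Status.Phase)
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pvc %s never reached phase Bound", claim)
	}

	func main() {
		if err := waitForBound("functional-454298", "myclaim", 30); err != nil {
			panic(err)
		}
	}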

                                                
                                    
TestFunctional/parallel/SSHCmd (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 ssh "echo hello"
functional_test.go:1759: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.41s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 ssh -n functional-454298 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 cp functional-454298:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2888818408/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 ssh -n functional-454298 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 ssh -n functional-454298 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.45s)

                                                
                                    
TestFunctional/parallel/MySQL (28.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1810: (dbg) Run:  kubectl --context functional-454298 replace --force -f testdata/mysql.yaml
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-fckfk" [f88d20e2-190e-4eaa-b71c-9f87c62ef165] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-58ccfd96bb-fckfk" [f88d20e2-190e-4eaa-b71c-9f87c62ef165] Running
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 26.01670175s
functional_test.go:1824: (dbg) Run:  kubectl --context functional-454298 exec mysql-58ccfd96bb-fckfk -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-454298 exec mysql-58ccfd96bb-fckfk -- mysql -ppassword -e "show databases;": exit status 1 (133.996365ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0211 02:13:04.613173   19645 retry.go:31] will retry after 764.255107ms: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-454298 exec mysql-58ccfd96bb-fckfk -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-454298 exec mysql-58ccfd96bb-fckfk -- mysql -ppassword -e "show databases;": exit status 1 (108.296175ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0211 02:13:05.486838   19645 retry.go:31] will retry after 1.312210613s: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-454298 exec mysql-58ccfd96bb-fckfk -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (28.66s)
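
The two non-zero exits above are absorbed by a retry helper that waits a little longer each time (764ms, then about 1.3s in the log) before re-running the mysql command. A generic sketch of such a backoff loop, not the helper minikube actually uses; the kubectl target here addresses the deployment rather than the generated pod name.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// retryWithBackoff runs fn up to attempts times, roughly doubling the delay
	// between failures, and returns the last error if it never succeeds.
	func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
		delay := initial
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			fmt.Printf("will retry after %s: %v\n", delay, err)
			time.Sleep(delay)
			delay *= 2
		}
		return err
	}

	func main() {
		err := retryWithBackoff(5, 800*time.Millisecond, func() error {
			cmd := exec.Command("kubectl", "--context", "functional-454298", "exec",
				"deploy/mysql", "--", "mysql", "-ppassword", "-e", "show databases;")
			return cmd.Run()
		})
		if err != nil {
			panic(err)
		}
	}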

                                                
                                    
TestFunctional/parallel/FileSync (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1946: Checking for existence of /etc/test/nested/copy/19645/hosts within VM
functional_test.go:1948: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 ssh "sudo cat /etc/test/nested/copy/19645/hosts"
functional_test.go:1953: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.31s)

                                                
                                    
TestFunctional/parallel/CertSync (1.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1989: Checking for existence of /etc/ssl/certs/19645.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 ssh "sudo cat /etc/ssl/certs/19645.pem"
functional_test.go:1989: Checking for existence of /usr/share/ca-certificates/19645.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 ssh "sudo cat /usr/share/ca-certificates/19645.pem"
functional_test.go:1989: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/196452.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 ssh "sudo cat /etc/ssl/certs/196452.pem"
functional_test.go:2016: Checking for existence of /usr/share/ca-certificates/196452.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 ssh "sudo cat /usr/share/ca-certificates/196452.pem"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.63s)
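
CertSync cats the synced PEM at several guest paths (including the OpenSSL hash-named /etc/ssl/certs/51391683.0) and expects the same content everywhere. A sketch of that comparison; the guest paths come from the log, while the local reference path is an assumption made for illustration.

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Assumed local copy of the synced certificate; adjust to the real path.
		want, err := os.ReadFile("/home/jenkins/minikube-integration/20400-12456/.minikube/files/etc/ssl/certs/19645.pem")
		if err != nil {
			panic(err)
		}
		paths := []string{
			"/etc/ssl/certs/19645.pem",
			"/usr/share/ca-certificates/19645.pem",
			"/etc/ssl/certs/51391683.0",
		}
		for _, p := range paths {
			got, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-454298", "ssh", "sudo cat "+p).Output()
			if err != nil {
				panic(err)
			}
			fmt.Printf("%s matches: %v\n", p, bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)))
		}
	}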

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:236: (dbg) Run:  kubectl --context functional-454298 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
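
The go-template in the command above prints every label key on the first node. The same effect in plain Go, shelling out to kubectl and decoding the node list; the struct fields follow the standard node object layout rather than anything minikube-specific.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type nodeList struct {
		Items []struct {
			Metadata struct {
				Labels map[string]string `json:"labels"`
			} `json:"metadata"`
		} `json:"items"`
	}

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-454298", "get", "nodes", "-o", "json").Output()
		if err != nil {
			panic(err)
		}
		var nodes nodeList
		if err := json.Unmarshal(out, &nodes); err != nil {
			panic(err)
		}
		// Print each label key on the first node, mirroring the template output.
		for k := range nodes.Items[0].Metadata.Labels {
			fmt.Print(k, " ")
		}
		fmt.Println()
	}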

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 ssh "sudo systemctl is-active docker"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-454298 ssh "sudo systemctl is-active docker": exit status 1 (308.955759ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 ssh "sudo systemctl is-active containerd"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-454298 ssh "sudo systemctl is-active containerd": exit status 1 (297.614435ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.61s)
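
With crio as the active runtime, `systemctl is-active docker` and `systemctl is-active containerd` print "inactive" and exit with status 3, which is why the non-zero exits above still count as a pass. A sketch of interpreting that exit status; the minikube binary path and profile name are taken from the log.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// runtimeActive reports whether the given systemd unit is active inside the VM.
	func runtimeActive(profile, unit string) (bool, error) {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh", "sudo systemctl is-active "+unit)
		out, err := cmd.Output()
		state := strings.TrimSpace(string(out))
		if err != nil {
			// A non-zero exit with "inactive" on stdout is the expected case here.
			if _, isExit := err.(*exec.ExitError); isExit && state == "inactive" {
				return false, nil
			}
			return false, err
		}
		return state == "active", nil
	}

	func main() {
		for _, unit := range []string{"docker", "containerd"} {
			active, err := runtimeActive("functional-454298", unit)
			if err != nil {
				panic(err)
			}
			fmt.Printf("%s active: %v\n", unit, active)
		}
	}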

                                                
                                    
TestFunctional/parallel/License (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2305: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.18s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (11.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-454298 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1462: (dbg) Run:  kubectl --context functional-454298 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-2jlxb" [f5858b40-b9d9-4dc3-8d10-4fb2a4c92908] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-2jlxb" [f5858b40-b9d9-4dc3-8d10-4fb2a4c92908] Running
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.00877712s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.21s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1287: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1292: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.51s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1327: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1332: Took "523.751261ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1341: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1346: Took "63.210579ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.59s)

                                                
                                    
TestFunctional/parallel/Version/short (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2273: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

                                                
                                    
TestFunctional/parallel/Version/components (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.44s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1378: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1383: Took "472.11919ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1391: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1396: Took "58.193163ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.53s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 image ls --format short --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-454298 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.1
registry.k8s.io/kube-proxy:v1.32.1
registry.k8s.io/kube-controller-manager:v1.32.1
registry.k8s.io/kube-apiserver:v1.32.1
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-454298
localhost/kicbase/echo-server:functional-454298
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/kindest/kindnetd:v20241108-5c6d2daf
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-454298 image ls --format short --alsologtostderr:
I0211 02:12:45.319081   28108 out.go:345] Setting OutFile to fd 1 ...
I0211 02:12:45.319182   28108 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0211 02:12:45.319190   28108 out.go:358] Setting ErrFile to fd 2...
I0211 02:12:45.319195   28108 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0211 02:12:45.319348   28108 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20400-12456/.minikube/bin
I0211 02:12:45.319918   28108 config.go:182] Loaded profile config "functional-454298": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0211 02:12:45.320012   28108 config.go:182] Loaded profile config "functional-454298": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0211 02:12:45.320355   28108 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0211 02:12:45.320411   28108 main.go:141] libmachine: Launching plugin server for driver kvm2
I0211 02:12:45.335804   28108 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44329
I0211 02:12:45.336231   28108 main.go:141] libmachine: () Calling .GetVersion
I0211 02:12:45.336721   28108 main.go:141] libmachine: Using API Version  1
I0211 02:12:45.336744   28108 main.go:141] libmachine: () Calling .SetConfigRaw
I0211 02:12:45.337135   28108 main.go:141] libmachine: () Calling .GetMachineName
I0211 02:12:45.337336   28108 main.go:141] libmachine: (functional-454298) Calling .GetState
I0211 02:12:45.339072   28108 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0211 02:12:45.339115   28108 main.go:141] libmachine: Launching plugin server for driver kvm2
I0211 02:12:45.354274   28108 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34131
I0211 02:12:45.354647   28108 main.go:141] libmachine: () Calling .GetVersion
I0211 02:12:45.355267   28108 main.go:141] libmachine: Using API Version  1
I0211 02:12:45.355298   28108 main.go:141] libmachine: () Calling .SetConfigRaw
I0211 02:12:45.355596   28108 main.go:141] libmachine: () Calling .GetMachineName
I0211 02:12:45.355813   28108 main.go:141] libmachine: (functional-454298) Calling .DriverName
I0211 02:12:45.355997   28108 ssh_runner.go:195] Run: systemctl --version
I0211 02:12:45.356025   28108 main.go:141] libmachine: (functional-454298) Calling .GetSSHHostname
I0211 02:12:45.359092   28108 main.go:141] libmachine: (functional-454298) DBG | domain functional-454298 has defined MAC address 52:54:00:ec:84:6c in network mk-functional-454298
I0211 02:12:45.359527   28108 main.go:141] libmachine: (functional-454298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:84:6c", ip: ""} in network mk-functional-454298: {Iface:virbr1 ExpiryTime:2025-02-11 03:09:55 +0000 UTC Type:0 Mac:52:54:00:ec:84:6c Iaid: IPaddr:192.168.50.88 Prefix:24 Hostname:functional-454298 Clientid:01:52:54:00:ec:84:6c}
I0211 02:12:45.359560   28108 main.go:141] libmachine: (functional-454298) DBG | domain functional-454298 has defined IP address 192.168.50.88 and MAC address 52:54:00:ec:84:6c in network mk-functional-454298
I0211 02:12:45.359705   28108 main.go:141] libmachine: (functional-454298) Calling .GetSSHPort
I0211 02:12:45.359900   28108 main.go:141] libmachine: (functional-454298) Calling .GetSSHKeyPath
I0211 02:12:45.360055   28108 main.go:141] libmachine: (functional-454298) Calling .GetSSHUsername
I0211 02:12:45.360198   28108 sshutil.go:53] new ssh client: &{IP:192.168.50.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/functional-454298/id_rsa Username:docker}
I0211 02:12:45.470808   28108 ssh_runner.go:195] Run: sudo crictl images --output json
I0211 02:12:45.529869   28108 main.go:141] libmachine: Making call to close driver server
I0211 02:12:45.529885   28108 main.go:141] libmachine: (functional-454298) Calling .Close
I0211 02:12:45.530163   28108 main.go:141] libmachine: Successfully made call to close driver server
I0211 02:12:45.530178   28108 main.go:141] libmachine: Making call to close connection to plugin binary
I0211 02:12:45.530195   28108 main.go:141] libmachine: Making call to close driver server
I0211 02:12:45.530195   28108 main.go:141] libmachine: (functional-454298) DBG | Closing plugin on server side
I0211 02:12:45.530202   28108 main.go:141] libmachine: (functional-454298) Calling .Close
I0211 02:12:45.530403   28108 main.go:141] libmachine: Successfully made call to close driver server
I0211 02:12:45.530427   28108 main.go:141] libmachine: Making call to close connection to plugin binary
I0211 02:12:45.530455   28108 main.go:141] libmachine: (functional-454298) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)
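
The stderr above shows `image ls` shelling into the guest and running `sudo crictl images --output json`, then reformatting the result. A sketch of decoding such a JSON image list; the struct fields mirror the id/repoTags/size keys visible in the ImageListJson output further down, and the exact on-the-wire format is an assumption.

	package main

	import (
		"encoding/json"
		"fmt"
	)

	type image struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
		Size     string   `json:"size"`
	}

	func main() {
		// Stand-in for the JSON produced on the node; a real caller would capture
		// the output of `crictl images --output json` over SSH instead.
		raw := []byte(`{"images":[{"id":"873ed751","repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"}]}`)

		var payload struct {
			Images []image `json:"images"`
		}
		if err := json.Unmarshal(raw, &payload); err != nil {
			panic(err)
		}
		for _, img := range payload.Images {
			tag := "<none>"
			if len(img.RepoTags) > 0 {
				tag = img.RepoTags[0]
			}
			fmt.Printf("%-45s %-12s %s\n", tag, img.ID, img.Size)
		}
	}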

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 image ls --format table --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-454298 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd              | v20241108-5c6d2daf | 50415e5d05f05 | 95MB   |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| localhost/kicbase/echo-server           | functional-454298  | 9056ab77afb8e | 4.94MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/my-image                      | functional-454298  | 7dc5fe6fd815c | 1.47MB |
| registry.k8s.io/etcd                    | 3.5.16-0           | a9e7e6b294baf | 151MB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| localhost/minikube-local-cache-test     | functional-454298  | 9a2fd165efcdb | 3.33kB |
| registry.k8s.io/kube-apiserver          | v1.32.1            | 95c0bda56fc4d | 98.1MB |
| registry.k8s.io/kube-controller-manager | v1.32.1            | 019ee182b58e2 | 90.8MB |
| registry.k8s.io/kube-proxy              | v1.32.1            | e29f9c7391fd9 | 95.3MB |
| registry.k8s.io/kube-scheduler          | v1.32.1            | 2b0d6572d062c | 70.6MB |
| docker.io/library/nginx                 | latest             | 97662d24417b3 | 196MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-454298 image ls --format table --alsologtostderr:
I0211 02:12:50.063071   28446 out.go:345] Setting OutFile to fd 1 ...
I0211 02:12:50.063206   28446 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0211 02:12:50.063217   28446 out.go:358] Setting ErrFile to fd 2...
I0211 02:12:50.063223   28446 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0211 02:12:50.063495   28446 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20400-12456/.minikube/bin
I0211 02:12:50.064294   28446 config.go:182] Loaded profile config "functional-454298": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0211 02:12:50.064444   28446 config.go:182] Loaded profile config "functional-454298": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0211 02:12:50.064953   28446 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0211 02:12:50.065021   28446 main.go:141] libmachine: Launching plugin server for driver kvm2
I0211 02:12:50.079940   28446 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45017
I0211 02:12:50.080468   28446 main.go:141] libmachine: () Calling .GetVersion
I0211 02:12:50.081035   28446 main.go:141] libmachine: Using API Version  1
I0211 02:12:50.081052   28446 main.go:141] libmachine: () Calling .SetConfigRaw
I0211 02:12:50.081487   28446 main.go:141] libmachine: () Calling .GetMachineName
I0211 02:12:50.081677   28446 main.go:141] libmachine: (functional-454298) Calling .GetState
I0211 02:12:50.083513   28446 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0211 02:12:50.083558   28446 main.go:141] libmachine: Launching plugin server for driver kvm2
I0211 02:12:50.098184   28446 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38759
I0211 02:12:50.098612   28446 main.go:141] libmachine: () Calling .GetVersion
I0211 02:12:50.099050   28446 main.go:141] libmachine: Using API Version  1
I0211 02:12:50.099068   28446 main.go:141] libmachine: () Calling .SetConfigRaw
I0211 02:12:50.099456   28446 main.go:141] libmachine: () Calling .GetMachineName
I0211 02:12:50.099628   28446 main.go:141] libmachine: (functional-454298) Calling .DriverName
I0211 02:12:50.099824   28446 ssh_runner.go:195] Run: systemctl --version
I0211 02:12:50.099851   28446 main.go:141] libmachine: (functional-454298) Calling .GetSSHHostname
I0211 02:12:50.102280   28446 main.go:141] libmachine: (functional-454298) DBG | domain functional-454298 has defined MAC address 52:54:00:ec:84:6c in network mk-functional-454298
I0211 02:12:50.102656   28446 main.go:141] libmachine: (functional-454298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:84:6c", ip: ""} in network mk-functional-454298: {Iface:virbr1 ExpiryTime:2025-02-11 03:09:55 +0000 UTC Type:0 Mac:52:54:00:ec:84:6c Iaid: IPaddr:192.168.50.88 Prefix:24 Hostname:functional-454298 Clientid:01:52:54:00:ec:84:6c}
I0211 02:12:50.102693   28446 main.go:141] libmachine: (functional-454298) DBG | domain functional-454298 has defined IP address 192.168.50.88 and MAC address 52:54:00:ec:84:6c in network mk-functional-454298
I0211 02:12:50.102787   28446 main.go:141] libmachine: (functional-454298) Calling .GetSSHPort
I0211 02:12:50.102942   28446 main.go:141] libmachine: (functional-454298) Calling .GetSSHKeyPath
I0211 02:12:50.103151   28446 main.go:141] libmachine: (functional-454298) Calling .GetSSHUsername
I0211 02:12:50.103260   28446 sshutil.go:53] new ssh client: &{IP:192.168.50.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/functional-454298/id_rsa Username:docker}
I0211 02:12:50.238991   28446 ssh_runner.go:195] Run: sudo crictl images --output json
I0211 02:12:50.901861   28446 main.go:141] libmachine: Making call to close driver server
I0211 02:12:50.901880   28446 main.go:141] libmachine: (functional-454298) Calling .Close
I0211 02:12:50.902161   28446 main.go:141] libmachine: (functional-454298) DBG | Closing plugin on server side
I0211 02:12:50.902208   28446 main.go:141] libmachine: Successfully made call to close driver server
I0211 02:12:50.902216   28446 main.go:141] libmachine: Making call to close connection to plugin binary
I0211 02:12:50.902228   28446 main.go:141] libmachine: Making call to close driver server
I0211 02:12:50.902235   28446 main.go:141] libmachine: (functional-454298) Calling .Close
I0211 02:12:50.902541   28446 main.go:141] libmachine: Successfully made call to close driver server
I0211 02:12:50.902554   28446 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.89s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 image ls --format json --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-454298 image ls --format json --alsologtostderr:
[{"id":"97662d24417b316f60607afbca9f226a2ba58f09d642f27b8e197a89859ddc8e","repoDigests":["docker.io/library/nginx@sha256:088eea90c3d0a540ee5686e7d7471acbd4063b6e97eaf49b5e651665eb7f4dc7","docker.io/library/nginx@sha256:91734281c0ebfc6f1aea979cffeed5079cfe786228a71cc6f1f46a228cde6e34"],"repoTags":["docker.io/library/nginx:latest"],"size":"196149140"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a","repoDigests":["registry.k8s.io/kube-apiserver@sha256:769a11bfd73df7db947d51b0f7a3a60383a0338904d6944cced924d33f0d7286","registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569
338c0ac"],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.1"],"size":"98051552"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954","registry.k8s.io/kube-controller-manager@sha256:c9067d10dcf5ca45b2be9260f3b15e9c94e05fd8039c53341a23d3b4cf0cc619"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.1"],"size":"90793286"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["regis
try.k8s.io/pause:3.10"],"size":"742080"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-45429
8"],"size":"4943877"},{"id":"a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","repoDigests":["registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990","registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5"],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"151021823"},{"id":"e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a","repoDigests":["registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5","registry.k8s.io/kube-proxy@sha256:a739122f1b5b17e2db96006120ad5fb9a3c654da07322bcaa62263c403ef69a8"],"repoTags":["registry.k8s.io/kube-proxy:v1.32.1"],"size":"95271321"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f3560
92ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"c3dc3dac2cab5fb5c10fc8c4e2dd17ef114706e9a6be05250409109dd43279cf","repoDigests":["docker.io/library/2e0c3e6d89f42ef4b3d835600b7981ce80a741f46b3ba99472a6bd2a9ea01074-tmp@sha256:d1e95bfc412fb5892ecc29e7a36113e909b853b08c49535528e9a8199567c7ed"],"repoTags":[],"size":"1466018"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"7dc5fe6fd815c56023fec6aacede3471f38b52bafb4378d96e633ffa07add01b","repoDigests":["localhost/my-image@sha256:1589390e85b8823ffeb29bb50cbf79651dcb6e7d8496fe39f97e1a6f09657f15"],"repoTags":["localhost/my-image:functional-454298"],"size":"1468600"},{"id":"2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1",
"repoDigests":["registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e","registry.k8s.io/kube-scheduler@sha256:e2b8e00ff17f8b0427e34d28897d7bf6f7a63ec48913ea01d4082ab91ca28476"],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.1"],"size":"70649158"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e","repoDigests":["docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3","docker.io/kindest/kindnetd@sha256:e35e1050b69d
cd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"],"repoTags":["docker.io/kindest/kindnetd:v20241108-5c6d2daf"],"size":"94963761"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"9a2fd165efcdb843eb317b6c2120b570932dddc57fb285eaee00a00b737b3c32","repoDigests":["localhost/minikube-local-cache-test@sha256:12de9db81ffb3a608a5cb76c69ef6a410497fd5c409c6f034c4005ef25ab2705"],"repoTags":["localhost/minikube-local-cache-test:functional-454298"],"size":"3330"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"9
7846543"}]
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-454298 image ls --format json --alsologtostderr:
I0211 02:12:49.679727   28380 out.go:345] Setting OutFile to fd 1 ...
I0211 02:12:49.680027   28380 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0211 02:12:49.680054   28380 out.go:358] Setting ErrFile to fd 2...
I0211 02:12:49.680060   28380 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0211 02:12:49.680314   28380 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20400-12456/.minikube/bin
I0211 02:12:49.681064   28380 config.go:182] Loaded profile config "functional-454298": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0211 02:12:49.681223   28380 config.go:182] Loaded profile config "functional-454298": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0211 02:12:49.681696   28380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0211 02:12:49.681768   28380 main.go:141] libmachine: Launching plugin server for driver kvm2
I0211 02:12:49.696851   28380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45967
I0211 02:12:49.697259   28380 main.go:141] libmachine: () Calling .GetVersion
I0211 02:12:49.697832   28380 main.go:141] libmachine: Using API Version  1
I0211 02:12:49.697849   28380 main.go:141] libmachine: () Calling .SetConfigRaw
I0211 02:12:49.698167   28380 main.go:141] libmachine: () Calling .GetMachineName
I0211 02:12:49.698335   28380 main.go:141] libmachine: (functional-454298) Calling .GetState
I0211 02:12:49.699878   28380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0211 02:12:49.699918   28380 main.go:141] libmachine: Launching plugin server for driver kvm2
I0211 02:12:49.714418   28380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41089
I0211 02:12:49.714776   28380 main.go:141] libmachine: () Calling .GetVersion
I0211 02:12:49.715308   28380 main.go:141] libmachine: Using API Version  1
I0211 02:12:49.715339   28380 main.go:141] libmachine: () Calling .SetConfigRaw
I0211 02:12:49.715610   28380 main.go:141] libmachine: () Calling .GetMachineName
I0211 02:12:49.715798   28380 main.go:141] libmachine: (functional-454298) Calling .DriverName
I0211 02:12:49.715989   28380 ssh_runner.go:195] Run: systemctl --version
I0211 02:12:49.716015   28380 main.go:141] libmachine: (functional-454298) Calling .GetSSHHostname
I0211 02:12:49.718702   28380 main.go:141] libmachine: (functional-454298) DBG | domain functional-454298 has defined MAC address 52:54:00:ec:84:6c in network mk-functional-454298
I0211 02:12:49.719102   28380 main.go:141] libmachine: (functional-454298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:84:6c", ip: ""} in network mk-functional-454298: {Iface:virbr1 ExpiryTime:2025-02-11 03:09:55 +0000 UTC Type:0 Mac:52:54:00:ec:84:6c Iaid: IPaddr:192.168.50.88 Prefix:24 Hostname:functional-454298 Clientid:01:52:54:00:ec:84:6c}
I0211 02:12:49.719127   28380 main.go:141] libmachine: (functional-454298) DBG | domain functional-454298 has defined IP address 192.168.50.88 and MAC address 52:54:00:ec:84:6c in network mk-functional-454298
I0211 02:12:49.719318   28380 main.go:141] libmachine: (functional-454298) Calling .GetSSHPort
I0211 02:12:49.719494   28380 main.go:141] libmachine: (functional-454298) Calling .GetSSHKeyPath
I0211 02:12:49.719628   28380 main.go:141] libmachine: (functional-454298) Calling .GetSSHUsername
I0211 02:12:49.719771   28380 sshutil.go:53] new ssh client: &{IP:192.168.50.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/functional-454298/id_rsa Username:docker}
I0211 02:12:49.833123   28380 ssh_runner.go:195] Run: sudo crictl images --output json
I0211 02:12:50.010963   28380 main.go:141] libmachine: Making call to close driver server
I0211 02:12:50.010980   28380 main.go:141] libmachine: (functional-454298) Calling .Close
I0211 02:12:50.011254   28380 main.go:141] libmachine: Successfully made call to close driver server
I0211 02:12:50.011272   28380 main.go:141] libmachine: Making call to close connection to plugin binary
I0211 02:12:50.011283   28380 main.go:141] libmachine: Making call to close driver server
I0211 02:12:50.011291   28380 main.go:141] libmachine: (functional-454298) Calling .Close
I0211 02:12:50.011476   28380 main.go:141] libmachine: Successfully made call to close driver server
I0211 02:12:50.011489   28380 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 image ls --format yaml --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-454298 image ls --format yaml --alsologtostderr:
- id: 50415e5d05f05adbdfd902507532ebb86f924dc2e05511a3b47920156ee4236e
repoDigests:
- docker.io/kindest/kindnetd@sha256:3da053f9c42d9123d34d4582cc77041c013e1419204b9ef180f0b3bffa7769e3
- docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108
repoTags:
- docker.io/kindest/kindnetd:v20241108-5c6d2daf
size: "94963761"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:769a11bfd73df7db947d51b0f7a3a60383a0338904d6944cced924d33f0d7286
- registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.1
size: "98051552"
- id: e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5
- registry.k8s.io/kube-proxy@sha256:a739122f1b5b17e2db96006120ad5fb9a3c654da07322bcaa62263c403ef69a8
repoTags:
- registry.k8s.io/kube-proxy:v1.32.1
size: "95271321"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc
repoDigests:
- registry.k8s.io/etcd@sha256:1d988b04a9476119cdbc2025ba58f6eec19554caf36edb43c357ff412d07e990
- registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "151021823"
- id: 019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954
- registry.k8s.io/kube-controller-manager@sha256:c9067d10dcf5ca45b2be9260f3b15e9c94e05fd8039c53341a23d3b4cf0cc619
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.1
size: "90793286"
- id: 97662d24417b316f60607afbca9f226a2ba58f09d642f27b8e197a89859ddc8e
repoDigests:
- docker.io/library/nginx@sha256:088eea90c3d0a540ee5686e7d7471acbd4063b6e97eaf49b5e651665eb7f4dc7
- docker.io/library/nginx@sha256:91734281c0ebfc6f1aea979cffeed5079cfe786228a71cc6f1f46a228cde6e34
repoTags:
- docker.io/library/nginx:latest
size: "196149140"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-454298
size: "4943877"
- id: 9a2fd165efcdb843eb317b6c2120b570932dddc57fb285eaee00a00b737b3c32
repoDigests:
- localhost/minikube-local-cache-test@sha256:12de9db81ffb3a608a5cb76c69ef6a410497fd5c409c6f034c4005ef25ab2705
repoTags:
- localhost/minikube-local-cache-test:functional-454298
size: "3330"
- id: 2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e
- registry.k8s.io/kube-scheduler@sha256:e2b8e00ff17f8b0427e34d28897d7bf6f7a63ec48913ea01d4082ab91ca28476
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.1
size: "70649158"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"

                                                
                                                
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-454298 image ls --format yaml --alsologtostderr:
I0211 02:12:45.577184   28132 out.go:345] Setting OutFile to fd 1 ...
I0211 02:12:45.577302   28132 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0211 02:12:45.577313   28132 out.go:358] Setting ErrFile to fd 2...
I0211 02:12:45.577317   28132 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0211 02:12:45.577533   28132 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20400-12456/.minikube/bin
I0211 02:12:45.578114   28132 config.go:182] Loaded profile config "functional-454298": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0211 02:12:45.578215   28132 config.go:182] Loaded profile config "functional-454298": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0211 02:12:45.578560   28132 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0211 02:12:45.578628   28132 main.go:141] libmachine: Launching plugin server for driver kvm2
I0211 02:12:45.594336   28132 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33399
I0211 02:12:45.594798   28132 main.go:141] libmachine: () Calling .GetVersion
I0211 02:12:45.595376   28132 main.go:141] libmachine: Using API Version  1
I0211 02:12:45.595403   28132 main.go:141] libmachine: () Calling .SetConfigRaw
I0211 02:12:45.595716   28132 main.go:141] libmachine: () Calling .GetMachineName
I0211 02:12:45.595910   28132 main.go:141] libmachine: (functional-454298) Calling .GetState
I0211 02:12:45.598098   28132 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0211 02:12:45.598156   28132 main.go:141] libmachine: Launching plugin server for driver kvm2
I0211 02:12:45.613845   28132 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44591
I0211 02:12:45.614265   28132 main.go:141] libmachine: () Calling .GetVersion
I0211 02:12:45.614719   28132 main.go:141] libmachine: Using API Version  1
I0211 02:12:45.614739   28132 main.go:141] libmachine: () Calling .SetConfigRaw
I0211 02:12:45.615118   28132 main.go:141] libmachine: () Calling .GetMachineName
I0211 02:12:45.615337   28132 main.go:141] libmachine: (functional-454298) Calling .DriverName
I0211 02:12:45.615530   28132 ssh_runner.go:195] Run: systemctl --version
I0211 02:12:45.615570   28132 main.go:141] libmachine: (functional-454298) Calling .GetSSHHostname
I0211 02:12:45.618696   28132 main.go:141] libmachine: (functional-454298) DBG | domain functional-454298 has defined MAC address 52:54:00:ec:84:6c in network mk-functional-454298
I0211 02:12:45.619124   28132 main.go:141] libmachine: (functional-454298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:84:6c", ip: ""} in network mk-functional-454298: {Iface:virbr1 ExpiryTime:2025-02-11 03:09:55 +0000 UTC Type:0 Mac:52:54:00:ec:84:6c Iaid: IPaddr:192.168.50.88 Prefix:24 Hostname:functional-454298 Clientid:01:52:54:00:ec:84:6c}
I0211 02:12:45.619154   28132 main.go:141] libmachine: (functional-454298) DBG | domain functional-454298 has defined IP address 192.168.50.88 and MAC address 52:54:00:ec:84:6c in network mk-functional-454298
I0211 02:12:45.619313   28132 main.go:141] libmachine: (functional-454298) Calling .GetSSHPort
I0211 02:12:45.619474   28132 main.go:141] libmachine: (functional-454298) Calling .GetSSHKeyPath
I0211 02:12:45.619594   28132 main.go:141] libmachine: (functional-454298) Calling .GetSSHUsername
I0211 02:12:45.619727   28132 sshutil.go:53] new ssh client: &{IP:192.168.50.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/functional-454298/id_rsa Username:docker}
I0211 02:12:45.712793   28132 ssh_runner.go:195] Run: sudo crictl images --output json
I0211 02:12:45.751967   28132 main.go:141] libmachine: Making call to close driver server
I0211 02:12:45.751982   28132 main.go:141] libmachine: (functional-454298) Calling .Close
I0211 02:12:45.752244   28132 main.go:141] libmachine: Successfully made call to close driver server
I0211 02:12:45.752267   28132 main.go:141] libmachine: Making call to close connection to plugin binary
I0211 02:12:45.752280   28132 main.go:141] libmachine: Making call to close driver server
I0211 02:12:45.752288   28132 main.go:141] libmachine: (functional-454298) Calling .Close
I0211 02:12:45.752253   28132 main.go:141] libmachine: (functional-454298) DBG | Closing plugin on server side
I0211 02:12:45.752550   28132 main.go:141] libmachine: Successfully made call to close driver server
I0211 02:12:45.752562   28132 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)
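
Note: the JSON and YAML listings above can be reproduced by hand against the same profile. A minimal sketch (profile name taken from this run; adjust to your own), using only commands that appear in the traces above:

    out/minikube-linux-amd64 -p functional-454298 image ls --format json
    out/minikube-linux-amd64 -p functional-454298 image ls --format yaml
    # as the stderr traces show, both wrap the runtime's own inventory fetched over SSH:
    out/minikube-linux-amd64 -p functional-454298 ssh -- sudo crictl images --output json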

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 ssh pgrep buildkitd
functional_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-454298 ssh pgrep buildkitd: exit status 1 (185.9744ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:332: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 image build -t localhost/my-image:functional-454298 testdata/build --alsologtostderr
functional_test.go:332: (dbg) Done: out/minikube-linux-amd64 -p functional-454298 image build -t localhost/my-image:functional-454298 testdata/build --alsologtostderr: (3.027533428s)
functional_test.go:337: (dbg) Stdout: out/minikube-linux-amd64 -p functional-454298 image build -t localhost/my-image:functional-454298 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> c3dc3dac2ca
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-454298
--> 7dc5fe6fd81
Successfully tagged localhost/my-image:functional-454298
7dc5fe6fd815c56023fec6aacede3471f38b52bafb4378d96e633ffa07add01b
functional_test.go:340: (dbg) Stderr: out/minikube-linux-amd64 -p functional-454298 image build -t localhost/my-image:functional-454298 testdata/build --alsologtostderr:
I0211 02:12:45.985927   28185 out.go:345] Setting OutFile to fd 1 ...
I0211 02:12:45.986187   28185 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0211 02:12:45.986197   28185 out.go:358] Setting ErrFile to fd 2...
I0211 02:12:45.986201   28185 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0211 02:12:45.986393   28185 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20400-12456/.minikube/bin
I0211 02:12:45.986956   28185 config.go:182] Loaded profile config "functional-454298": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0211 02:12:45.987473   28185 config.go:182] Loaded profile config "functional-454298": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
I0211 02:12:45.987827   28185 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0211 02:12:45.987874   28185 main.go:141] libmachine: Launching plugin server for driver kvm2
I0211 02:12:46.002778   28185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44195
I0211 02:12:46.003242   28185 main.go:141] libmachine: () Calling .GetVersion
I0211 02:12:46.003754   28185 main.go:141] libmachine: Using API Version  1
I0211 02:12:46.003774   28185 main.go:141] libmachine: () Calling .SetConfigRaw
I0211 02:12:46.004126   28185 main.go:141] libmachine: () Calling .GetMachineName
I0211 02:12:46.004319   28185 main.go:141] libmachine: (functional-454298) Calling .GetState
I0211 02:12:46.005932   28185 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0211 02:12:46.005965   28185 main.go:141] libmachine: Launching plugin server for driver kvm2
I0211 02:12:46.022034   28185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39817
I0211 02:12:46.022409   28185 main.go:141] libmachine: () Calling .GetVersion
I0211 02:12:46.022885   28185 main.go:141] libmachine: Using API Version  1
I0211 02:12:46.022906   28185 main.go:141] libmachine: () Calling .SetConfigRaw
I0211 02:12:46.023195   28185 main.go:141] libmachine: () Calling .GetMachineName
I0211 02:12:46.023380   28185 main.go:141] libmachine: (functional-454298) Calling .DriverName
I0211 02:12:46.023582   28185 ssh_runner.go:195] Run: systemctl --version
I0211 02:12:46.023603   28185 main.go:141] libmachine: (functional-454298) Calling .GetSSHHostname
I0211 02:12:46.026370   28185 main.go:141] libmachine: (functional-454298) DBG | domain functional-454298 has defined MAC address 52:54:00:ec:84:6c in network mk-functional-454298
I0211 02:12:46.026777   28185 main.go:141] libmachine: (functional-454298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:84:6c", ip: ""} in network mk-functional-454298: {Iface:virbr1 ExpiryTime:2025-02-11 03:09:55 +0000 UTC Type:0 Mac:52:54:00:ec:84:6c Iaid: IPaddr:192.168.50.88 Prefix:24 Hostname:functional-454298 Clientid:01:52:54:00:ec:84:6c}
I0211 02:12:46.026808   28185 main.go:141] libmachine: (functional-454298) DBG | domain functional-454298 has defined IP address 192.168.50.88 and MAC address 52:54:00:ec:84:6c in network mk-functional-454298
I0211 02:12:46.026928   28185 main.go:141] libmachine: (functional-454298) Calling .GetSSHPort
I0211 02:12:46.027119   28185 main.go:141] libmachine: (functional-454298) Calling .GetSSHKeyPath
I0211 02:12:46.027267   28185 main.go:141] libmachine: (functional-454298) Calling .GetSSHUsername
I0211 02:12:46.027410   28185 sshutil.go:53] new ssh client: &{IP:192.168.50.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/functional-454298/id_rsa Username:docker}
I0211 02:12:46.109178   28185 build_images.go:161] Building image from path: /tmp/build.3665902184.tar
I0211 02:12:46.109255   28185 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0211 02:12:46.118331   28185 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3665902184.tar
I0211 02:12:46.122079   28185 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3665902184.tar: stat -c "%s %y" /var/lib/minikube/build/build.3665902184.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3665902184.tar': No such file or directory
I0211 02:12:46.122105   28185 ssh_runner.go:362] scp /tmp/build.3665902184.tar --> /var/lib/minikube/build/build.3665902184.tar (3072 bytes)
I0211 02:12:46.148018   28185 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3665902184
I0211 02:12:46.156650   28185 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3665902184 -xf /var/lib/minikube/build/build.3665902184.tar
I0211 02:12:46.165296   28185 crio.go:315] Building image: /var/lib/minikube/build/build.3665902184
I0211 02:12:46.165357   28185 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-454298 /var/lib/minikube/build/build.3665902184 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0211 02:12:48.935026   28185 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-454298 /var/lib/minikube/build/build.3665902184 --cgroup-manager=cgroupfs: (2.76964682s)
I0211 02:12:48.935095   28185 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3665902184
I0211 02:12:48.957444   28185 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3665902184.tar
I0211 02:12:48.966821   28185 build_images.go:217] Built localhost/my-image:functional-454298 from /tmp/build.3665902184.tar
I0211 02:12:48.966851   28185 build_images.go:133] succeeded building to: functional-454298
I0211 02:12:48.966858   28185 build_images.go:134] failed building to: 
I0211 02:12:48.966904   28185 main.go:141] libmachine: Making call to close driver server
I0211 02:12:48.966921   28185 main.go:141] libmachine: (functional-454298) Calling .Close
I0211 02:12:48.967186   28185 main.go:141] libmachine: Successfully made call to close driver server
I0211 02:12:48.967205   28185 main.go:141] libmachine: Making call to close connection to plugin binary
I0211 02:12:48.967212   28185 main.go:141] libmachine: Making call to close driver server
I0211 02:12:48.967217   28185 main.go:141] libmachine: (functional-454298) DBG | Closing plugin on server side
I0211 02:12:48.967243   28185 main.go:141] libmachine: (functional-454298) Calling .Close
I0211 02:12:48.967513   28185 main.go:141] libmachine: Successfully made call to close driver server
I0211 02:12:48.967522   28185 main.go:141] libmachine: (functional-454298) DBG | Closing plugin on server side
I0211 02:12:48.967530   28185 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.85s)
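
Note: judging from the STEP output above, the testdata/build context is a three-step containerfile. A hedged sketch of an equivalent manual build follows; the real content.txt payload is not shown in the log, so the placeholder line is a stand-in:

    mkdir -p /tmp/build-demo && cd /tmp/build-demo
    printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
    echo placeholder > content.txt   # stand-in; the test ships its own content.txt
    out/minikube-linux-amd64 -p functional-454298 image build -t localhost/my-image:functional-454298 .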

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:359: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:359: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.649461762s)
functional_test.go:364: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-454298
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.67s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:372: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 image load --daemon kicbase/echo-server:functional-454298 --alsologtostderr
functional_test.go:372: (dbg) Done: out/minikube-linux-amd64 -p functional-454298 image load --daemon kicbase/echo-server:functional-454298 --alsologtostderr: (1.590286619s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.81s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 image load --daemon kicbase/echo-server:functional-454298 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.85s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:252: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:257: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-454298
functional_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 image load --daemon kicbase/echo-server:functional-454298 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.78s)
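
Note: the three daemon-load variants above (ImageLoadDaemon, ImageReloadDaemon, ImageTagAndLoadDaemon) exercise the same flow. A minimal sketch, with image and profile names taken from this run:

    docker pull kicbase/echo-server:latest
    docker tag kicbase/echo-server:latest kicbase/echo-server:functional-454298
    out/minikube-linux-amd64 -p functional-454298 image load --daemon kicbase/echo-server:functional-454298
    out/minikube-linux-amd64 -p functional-454298 image ls   # the tag should now appear in the cluster runtime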

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:397: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 image save kicbase/echo-server:functional-454298 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.60s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (2.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 image rm kicbase/echo-server:functional-454298 --alsologtostderr
functional_test.go:409: (dbg) Done: out/minikube-linux-amd64 -p functional-454298 image rm kicbase/echo-server:functional-454298 --alsologtostderr: (2.065332673s)
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (2.32s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1476: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:426: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.89s)
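
Note: together with ImageSaveToFile and ImageRemove above, this subtest completes a save/remove/reload round trip. The same sequence by hand (the tar path here is just an example location):

    out/minikube-linux-amd64 -p functional-454298 image save kicbase/echo-server:functional-454298 /tmp/echo-server-save.tar
    out/minikube-linux-amd64 -p functional-454298 image rm kicbase/echo-server:functional-454298
    out/minikube-linux-amd64 -p functional-454298 image load /tmp/echo-server-save.tar
    out/minikube-linux-amd64 -p functional-454298 image ls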

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1506: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 service list -o json
functional_test.go:1511: Took "444.276165ms" to run "out/minikube-linux-amd64 -p functional-454298 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1526: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 service --namespace=default --https --url hello-node
functional_test.go:1539: found endpoint: https://192.168.50.88:32557
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:436: (dbg) Run:  docker rmi kicbase/echo-server:functional-454298
functional_test.go:441: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 image save --daemon kicbase/echo-server:functional-454298 --alsologtostderr
functional_test.go:449: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-454298
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.59s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1557: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1576: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 service hello-node --url
functional_test.go:1582: found endpoint for hello-node: http://192.168.50.88:32557
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.29s)
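
Note: the ServiceCmd subtests above all query the same NodePort service. Equivalent manual checks, with service name and profile taken from this run:

    out/minikube-linux-amd64 -p functional-454298 service list -o json
    out/minikube-linux-amd64 -p functional-454298 service hello-node --url
    out/minikube-linux-amd64 -p functional-454298 service --namespace=default --https --url hello-node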

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (12.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-454298 /tmp/TestFunctionalparallelMountCmdany-port346204873/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1739239956711108787" to /tmp/TestFunctionalparallelMountCmdany-port346204873/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1739239956711108787" to /tmp/TestFunctionalparallelMountCmdany-port346204873/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1739239956711108787" to /tmp/TestFunctionalparallelMountCmdany-port346204873/001/test-1739239956711108787
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-454298 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (268.729414ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0211 02:12:36.980225   19645 retry.go:31] will retry after 529.916566ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 ssh -- ls -la /mount-9p
2025/02/11 02:12:37 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Feb 11 02:12 created-by-test
-rw-r--r-- 1 docker docker 24 Feb 11 02:12 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Feb 11 02:12 test-1739239956711108787
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 ssh cat /mount-9p/test-1739239956711108787
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-454298 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [0837d3f0-13f4-4593-964f-6cc3938f1598] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [0837d3f0-13f4-4593-964f-6cc3938f1598] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [0837d3f0-13f4-4593-964f-6cc3938f1598] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 10.003820101s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-454298 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-454298 /tmp/TestFunctionalparallelMountCmdany-port346204873/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (12.87s)
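
Note: the any-port flow above can be reproduced manually. A sketch under the assumption of an arbitrary local source directory (the guest path matches the test):

    out/minikube-linux-amd64 mount -p functional-454298 /tmp/mount-src:/mount-9p --alsologtostderr -v=1 &
    out/minikube-linux-amd64 -p functional-454298 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-454298 ssh -- ls -la /mount-9p
    out/minikube-linux-amd64 -p functional-454298 ssh "sudo umount -f /mount-9p"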

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-454298 /tmp/TestFunctionalparallelMountCmdspecific-port693128838/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-454298 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (293.504998ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0211 02:12:49.873592   19645 retry.go:31] will retry after 257.977275ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-454298 /tmp/TestFunctionalparallelMountCmdspecific-port693128838/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-454298 ssh "sudo umount -f /mount-9p": exit status 1 (255.225343ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-454298 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-454298 /tmp/TestFunctionalparallelMountCmdspecific-port693128838/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.71s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-454298 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2352129326/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-454298 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2352129326/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-454298 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2352129326/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-454298 ssh "findmnt -T" /mount1: exit status 1 (330.066614ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0211 02:12:51.626059   19645 retry.go:31] will retry after 526.239331ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-454298 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-454298 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-454298 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2352129326/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-454298 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2352129326/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-454298 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2352129326/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.61s)
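
Note: specific-port and VerifyCleanup only vary the mount invocation. The relevant pieces, as used in the runs above (local path again an example):

    out/minikube-linux-amd64 mount -p functional-454298 /tmp/mount-src:/mount-9p --port 46464 --alsologtostderr -v=1 &
    out/minikube-linux-amd64 mount -p functional-454298 --kill=true   # used by the test to stop the running mount processes for the profile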

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-454298
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:215: (dbg) Run:  docker rmi -f localhost/my-image:functional-454298
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:223: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-454298
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (188.44s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-672486 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0211 02:14:16.210843   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:14:43.921686   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-672486 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m7.792661256s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (188.44s)
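
Note: the HA cluster used by the following subtests comes from this single start invocation, followed by a status check, exactly as run above:

    out/minikube-linux-amd64 start -p ha-672486 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p ha-672486 status -v=7 --alsologtostderr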

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (5.94s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-672486 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-672486 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-672486 -- rollout status deployment/busybox: (3.75528016s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-672486 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-672486 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-672486 -- exec busybox-58667487b6-7lmv2 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-672486 -- exec busybox-58667487b6-d9gl5 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-672486 -- exec busybox-58667487b6-t6b99 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-672486 -- exec busybox-58667487b6-7lmv2 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-672486 -- exec busybox-58667487b6-d9gl5 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-672486 -- exec busybox-58667487b6-t6b99 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-672486 -- exec busybox-58667487b6-7lmv2 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-672486 -- exec busybox-58667487b6-d9gl5 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-672486 -- exec busybox-58667487b6-t6b99 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.94s)
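
Note: DeployApp rolls out the busybox DNS-test deployment and verifies in-cluster resolution. The core steps, with the pod name left as a placeholder since it is generated per run:

    out/minikube-linux-amd64 kubectl -p ha-672486 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
    out/minikube-linux-amd64 kubectl -p ha-672486 -- rollout status deployment/busybox
    out/minikube-linux-amd64 kubectl -p ha-672486 -- exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local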

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-672486 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-672486 -- exec busybox-58667487b6-7lmv2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-672486 -- exec busybox-58667487b6-7lmv2 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-672486 -- exec busybox-58667487b6-d9gl5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-672486 -- exec busybox-58667487b6-d9gl5 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-672486 -- exec busybox-58667487b6-t6b99 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-672486 -- exec busybox-58667487b6-t6b99 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.10s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (55.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-672486 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-672486 -v=7 --alsologtostderr: (54.757737185s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (55.56s)
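
Note: worker nodes are attached to the same profile with node add; the commands as run above:

    out/minikube-linux-amd64 node add -p ha-672486 -v=7 --alsologtostderr
    out/minikube-linux-amd64 -p ha-672486 status -v=7 --alsologtostderr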

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-672486 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.84s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (12.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 cp testdata/cp-test.txt ha-672486:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 ssh -n ha-672486 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 cp ha-672486:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2956067349/001/cp-test_ha-672486.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 ssh -n ha-672486 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 cp ha-672486:/home/docker/cp-test.txt ha-672486-m02:/home/docker/cp-test_ha-672486_ha-672486-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 ssh -n ha-672486 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 ssh -n ha-672486-m02 "sudo cat /home/docker/cp-test_ha-672486_ha-672486-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 cp ha-672486:/home/docker/cp-test.txt ha-672486-m03:/home/docker/cp-test_ha-672486_ha-672486-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 ssh -n ha-672486 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 ssh -n ha-672486-m03 "sudo cat /home/docker/cp-test_ha-672486_ha-672486-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 cp ha-672486:/home/docker/cp-test.txt ha-672486-m04:/home/docker/cp-test_ha-672486_ha-672486-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 ssh -n ha-672486 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 ssh -n ha-672486-m04 "sudo cat /home/docker/cp-test_ha-672486_ha-672486-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 cp testdata/cp-test.txt ha-672486-m02:/home/docker/cp-test.txt
E0211 02:17:23.755253   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/functional-454298/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:17:23.761616   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/functional-454298/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:17:23.772948   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/functional-454298/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:17:23.794315   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/functional-454298/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 ssh -n ha-672486-m02 "sudo cat /home/docker/cp-test.txt"
E0211 02:17:23.836270   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/functional-454298/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:17:23.917750   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/functional-454298/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 cp ha-672486-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2956067349/001/cp-test_ha-672486-m02.txt
E0211 02:17:24.079164   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/functional-454298/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 ssh -n ha-672486-m02 "sudo cat /home/docker/cp-test.txt"
E0211 02:17:24.401257   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/functional-454298/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 cp ha-672486-m02:/home/docker/cp-test.txt ha-672486:/home/docker/cp-test_ha-672486-m02_ha-672486.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 ssh -n ha-672486-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 ssh -n ha-672486 "sudo cat /home/docker/cp-test_ha-672486-m02_ha-672486.txt"
E0211 02:17:25.043518   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/functional-454298/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 cp ha-672486-m02:/home/docker/cp-test.txt ha-672486-m03:/home/docker/cp-test_ha-672486-m02_ha-672486-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 ssh -n ha-672486-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 ssh -n ha-672486-m03 "sudo cat /home/docker/cp-test_ha-672486-m02_ha-672486-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 cp ha-672486-m02:/home/docker/cp-test.txt ha-672486-m04:/home/docker/cp-test_ha-672486-m02_ha-672486-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 ssh -n ha-672486-m02 "sudo cat /home/docker/cp-test.txt"
E0211 02:17:26.325152   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/functional-454298/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 ssh -n ha-672486-m04 "sudo cat /home/docker/cp-test_ha-672486-m02_ha-672486-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 cp testdata/cp-test.txt ha-672486-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 ssh -n ha-672486-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 cp ha-672486-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2956067349/001/cp-test_ha-672486-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 ssh -n ha-672486-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 cp ha-672486-m03:/home/docker/cp-test.txt ha-672486:/home/docker/cp-test_ha-672486-m03_ha-672486.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 ssh -n ha-672486-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 ssh -n ha-672486 "sudo cat /home/docker/cp-test_ha-672486-m03_ha-672486.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 cp ha-672486-m03:/home/docker/cp-test.txt ha-672486-m02:/home/docker/cp-test_ha-672486-m03_ha-672486-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 ssh -n ha-672486-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 ssh -n ha-672486-m02 "sudo cat /home/docker/cp-test_ha-672486-m03_ha-672486-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 cp ha-672486-m03:/home/docker/cp-test.txt ha-672486-m04:/home/docker/cp-test_ha-672486-m03_ha-672486-m04.txt
E0211 02:17:28.887033   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/functional-454298/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 ssh -n ha-672486-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 ssh -n ha-672486-m04 "sudo cat /home/docker/cp-test_ha-672486-m03_ha-672486-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 cp testdata/cp-test.txt ha-672486-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 ssh -n ha-672486-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 cp ha-672486-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2956067349/001/cp-test_ha-672486-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 ssh -n ha-672486-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 cp ha-672486-m04:/home/docker/cp-test.txt ha-672486:/home/docker/cp-test_ha-672486-m04_ha-672486.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 ssh -n ha-672486-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 ssh -n ha-672486 "sudo cat /home/docker/cp-test_ha-672486-m04_ha-672486.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 cp ha-672486-m04:/home/docker/cp-test.txt ha-672486-m02:/home/docker/cp-test_ha-672486-m04_ha-672486-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 ssh -n ha-672486-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 ssh -n ha-672486-m02 "sudo cat /home/docker/cp-test_ha-672486-m04_ha-672486-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 cp ha-672486-m04:/home/docker/cp-test.txt ha-672486-m03:/home/docker/cp-test_ha-672486-m04_ha-672486-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 ssh -n ha-672486-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 ssh -n ha-672486-m03 "sudo cat /home/docker/cp-test_ha-672486-m04_ha-672486-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.61s)
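
Every hop in the copy matrix above follows the same two-step pattern: push a file with "minikube cp", then read it back over ssh to confirm the contents landed; a minimal sketch of a single hop, reusing the profile and node names from this run (assumptions if your profile differs):

	# ha-672486-m02 is the second control-plane node in this run
	out/minikube-linux-amd64 -p ha-672486 cp testdata/cp-test.txt ha-672486-m02:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p ha-672486 ssh -n ha-672486-m02 "sudo cat /home/docker/cp-test.txt"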

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (91.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 node stop m02 -v=7 --alsologtostderr
E0211 02:17:34.008908   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/functional-454298/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:17:44.250268   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/functional-454298/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:18:04.732432   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/functional-454298/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:18:45.694251   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/functional-454298/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-672486 node stop m02 -v=7 --alsologtostderr: (1m30.962674336s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-672486 status -v=7 --alsologtostderr: exit status 7 (618.034296ms)

                                                
                                                
-- stdout --
	ha-672486
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-672486-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-672486-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-672486-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0211 02:19:03.480055   33782 out.go:345] Setting OutFile to fd 1 ...
	I0211 02:19:03.480158   33782 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 02:19:03.480166   33782 out.go:358] Setting ErrFile to fd 2...
	I0211 02:19:03.480177   33782 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 02:19:03.480339   33782 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20400-12456/.minikube/bin
	I0211 02:19:03.480533   33782 out.go:352] Setting JSON to false
	I0211 02:19:03.480559   33782 mustload.go:65] Loading cluster: ha-672486
	I0211 02:19:03.480617   33782 notify.go:220] Checking for updates...
	I0211 02:19:03.480916   33782 config.go:182] Loaded profile config "ha-672486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0211 02:19:03.480932   33782 status.go:174] checking status of ha-672486 ...
	I0211 02:19:03.481309   33782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:19:03.481348   33782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:19:03.497134   33782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45307
	I0211 02:19:03.497527   33782 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:19:03.498002   33782 main.go:141] libmachine: Using API Version  1
	I0211 02:19:03.498023   33782 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:19:03.498405   33782 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:19:03.498591   33782 main.go:141] libmachine: (ha-672486) Calling .GetState
	I0211 02:19:03.500351   33782 status.go:371] ha-672486 host status = "Running" (err=<nil>)
	I0211 02:19:03.500373   33782 host.go:66] Checking if "ha-672486" exists ...
	I0211 02:19:03.500661   33782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:19:03.500697   33782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:19:03.515025   33782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33321
	I0211 02:19:03.515366   33782 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:19:03.515844   33782 main.go:141] libmachine: Using API Version  1
	I0211 02:19:03.515874   33782 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:19:03.516147   33782 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:19:03.516316   33782 main.go:141] libmachine: (ha-672486) Calling .GetIP
	I0211 02:19:03.519156   33782 main.go:141] libmachine: (ha-672486) DBG | domain ha-672486 has defined MAC address 52:54:00:b2:b1:eb in network mk-ha-672486
	I0211 02:19:03.519565   33782 main.go:141] libmachine: (ha-672486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b1:eb", ip: ""} in network mk-ha-672486: {Iface:virbr1 ExpiryTime:2025-02-11 03:13:22 +0000 UTC Type:0 Mac:52:54:00:b2:b1:eb Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-672486 Clientid:01:52:54:00:b2:b1:eb}
	I0211 02:19:03.519591   33782 main.go:141] libmachine: (ha-672486) DBG | domain ha-672486 has defined IP address 192.168.39.116 and MAC address 52:54:00:b2:b1:eb in network mk-ha-672486
	I0211 02:19:03.519728   33782 host.go:66] Checking if "ha-672486" exists ...
	I0211 02:19:03.519991   33782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:19:03.520026   33782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:19:03.534104   33782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38067
	I0211 02:19:03.534545   33782 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:19:03.535082   33782 main.go:141] libmachine: Using API Version  1
	I0211 02:19:03.535104   33782 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:19:03.535389   33782 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:19:03.535540   33782 main.go:141] libmachine: (ha-672486) Calling .DriverName
	I0211 02:19:03.535693   33782 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0211 02:19:03.535716   33782 main.go:141] libmachine: (ha-672486) Calling .GetSSHHostname
	I0211 02:19:03.538227   33782 main.go:141] libmachine: (ha-672486) DBG | domain ha-672486 has defined MAC address 52:54:00:b2:b1:eb in network mk-ha-672486
	I0211 02:19:03.538633   33782 main.go:141] libmachine: (ha-672486) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:b1:eb", ip: ""} in network mk-ha-672486: {Iface:virbr1 ExpiryTime:2025-02-11 03:13:22 +0000 UTC Type:0 Mac:52:54:00:b2:b1:eb Iaid: IPaddr:192.168.39.116 Prefix:24 Hostname:ha-672486 Clientid:01:52:54:00:b2:b1:eb}
	I0211 02:19:03.538667   33782 main.go:141] libmachine: (ha-672486) DBG | domain ha-672486 has defined IP address 192.168.39.116 and MAC address 52:54:00:b2:b1:eb in network mk-ha-672486
	I0211 02:19:03.538773   33782 main.go:141] libmachine: (ha-672486) Calling .GetSSHPort
	I0211 02:19:03.538937   33782 main.go:141] libmachine: (ha-672486) Calling .GetSSHKeyPath
	I0211 02:19:03.539081   33782 main.go:141] libmachine: (ha-672486) Calling .GetSSHUsername
	I0211 02:19:03.539206   33782 sshutil.go:53] new ssh client: &{IP:192.168.39.116 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/ha-672486/id_rsa Username:docker}
	I0211 02:19:03.624099   33782 ssh_runner.go:195] Run: systemctl --version
	I0211 02:19:03.630169   33782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0211 02:19:03.648512   33782 kubeconfig.go:125] found "ha-672486" server: "https://192.168.39.254:8443"
	I0211 02:19:03.648563   33782 api_server.go:166] Checking apiserver status ...
	I0211 02:19:03.648615   33782 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 02:19:03.664435   33782 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1093/cgroup
	W0211 02:19:03.674693   33782 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1093/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0211 02:19:03.674743   33782 ssh_runner.go:195] Run: ls
	I0211 02:19:03.679198   33782 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0211 02:19:03.684306   33782 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0211 02:19:03.684331   33782 status.go:463] ha-672486 apiserver status = Running (err=<nil>)
	I0211 02:19:03.684341   33782 status.go:176] ha-672486 status: &{Name:ha-672486 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0211 02:19:03.684370   33782 status.go:174] checking status of ha-672486-m02 ...
	I0211 02:19:03.684767   33782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:19:03.684808   33782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:19:03.699209   33782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45671
	I0211 02:19:03.699591   33782 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:19:03.700022   33782 main.go:141] libmachine: Using API Version  1
	I0211 02:19:03.700040   33782 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:19:03.700327   33782 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:19:03.700547   33782 main.go:141] libmachine: (ha-672486-m02) Calling .GetState
	I0211 02:19:03.702034   33782 status.go:371] ha-672486-m02 host status = "Stopped" (err=<nil>)
	I0211 02:19:03.702049   33782 status.go:384] host is not running, skipping remaining checks
	I0211 02:19:03.702055   33782 status.go:176] ha-672486-m02 status: &{Name:ha-672486-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0211 02:19:03.702075   33782 status.go:174] checking status of ha-672486-m03 ...
	I0211 02:19:03.702383   33782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:19:03.702430   33782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:19:03.717379   33782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36677
	I0211 02:19:03.717858   33782 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:19:03.718356   33782 main.go:141] libmachine: Using API Version  1
	I0211 02:19:03.718381   33782 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:19:03.718694   33782 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:19:03.718909   33782 main.go:141] libmachine: (ha-672486-m03) Calling .GetState
	I0211 02:19:03.720212   33782 status.go:371] ha-672486-m03 host status = "Running" (err=<nil>)
	I0211 02:19:03.720231   33782 host.go:66] Checking if "ha-672486-m03" exists ...
	I0211 02:19:03.720507   33782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:19:03.720539   33782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:19:03.734319   33782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46793
	I0211 02:19:03.734722   33782 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:19:03.735182   33782 main.go:141] libmachine: Using API Version  1
	I0211 02:19:03.735203   33782 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:19:03.735473   33782 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:19:03.735626   33782 main.go:141] libmachine: (ha-672486-m03) Calling .GetIP
	I0211 02:19:03.737874   33782 main.go:141] libmachine: (ha-672486-m03) DBG | domain ha-672486-m03 has defined MAC address 52:54:00:d3:71:00 in network mk-ha-672486
	I0211 02:19:03.738195   33782 main.go:141] libmachine: (ha-672486-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:71:00", ip: ""} in network mk-ha-672486: {Iface:virbr1 ExpiryTime:2025-02-11 03:15:17 +0000 UTC Type:0 Mac:52:54:00:d3:71:00 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-672486-m03 Clientid:01:52:54:00:d3:71:00}
	I0211 02:19:03.738222   33782 main.go:141] libmachine: (ha-672486-m03) DBG | domain ha-672486-m03 has defined IP address 192.168.39.54 and MAC address 52:54:00:d3:71:00 in network mk-ha-672486
	I0211 02:19:03.738339   33782 host.go:66] Checking if "ha-672486-m03" exists ...
	I0211 02:19:03.738644   33782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:19:03.738678   33782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:19:03.752556   33782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35875
	I0211 02:19:03.752964   33782 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:19:03.753447   33782 main.go:141] libmachine: Using API Version  1
	I0211 02:19:03.753470   33782 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:19:03.753742   33782 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:19:03.753915   33782 main.go:141] libmachine: (ha-672486-m03) Calling .DriverName
	I0211 02:19:03.754071   33782 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0211 02:19:03.754093   33782 main.go:141] libmachine: (ha-672486-m03) Calling .GetSSHHostname
	I0211 02:19:03.756633   33782 main.go:141] libmachine: (ha-672486-m03) DBG | domain ha-672486-m03 has defined MAC address 52:54:00:d3:71:00 in network mk-ha-672486
	I0211 02:19:03.757184   33782 main.go:141] libmachine: (ha-672486-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:71:00", ip: ""} in network mk-ha-672486: {Iface:virbr1 ExpiryTime:2025-02-11 03:15:17 +0000 UTC Type:0 Mac:52:54:00:d3:71:00 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:ha-672486-m03 Clientid:01:52:54:00:d3:71:00}
	I0211 02:19:03.757218   33782 main.go:141] libmachine: (ha-672486-m03) DBG | domain ha-672486-m03 has defined IP address 192.168.39.54 and MAC address 52:54:00:d3:71:00 in network mk-ha-672486
	I0211 02:19:03.757315   33782 main.go:141] libmachine: (ha-672486-m03) Calling .GetSSHPort
	I0211 02:19:03.757487   33782 main.go:141] libmachine: (ha-672486-m03) Calling .GetSSHKeyPath
	I0211 02:19:03.757623   33782 main.go:141] libmachine: (ha-672486-m03) Calling .GetSSHUsername
	I0211 02:19:03.757737   33782 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/ha-672486-m03/id_rsa Username:docker}
	I0211 02:19:03.843573   33782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0211 02:19:03.858971   33782 kubeconfig.go:125] found "ha-672486" server: "https://192.168.39.254:8443"
	I0211 02:19:03.858996   33782 api_server.go:166] Checking apiserver status ...
	I0211 02:19:03.859025   33782 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 02:19:03.872939   33782 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1454/cgroup
	W0211 02:19:03.882028   33782 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1454/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0211 02:19:03.882089   33782 ssh_runner.go:195] Run: ls
	I0211 02:19:03.886196   33782 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0211 02:19:03.890861   33782 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0211 02:19:03.890900   33782 status.go:463] ha-672486-m03 apiserver status = Running (err=<nil>)
	I0211 02:19:03.890921   33782 status.go:176] ha-672486-m03 status: &{Name:ha-672486-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0211 02:19:03.890946   33782 status.go:174] checking status of ha-672486-m04 ...
	I0211 02:19:03.891343   33782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:19:03.891386   33782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:19:03.906185   33782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33049
	I0211 02:19:03.906549   33782 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:19:03.907096   33782 main.go:141] libmachine: Using API Version  1
	I0211 02:19:03.907111   33782 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:19:03.907359   33782 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:19:03.907568   33782 main.go:141] libmachine: (ha-672486-m04) Calling .GetState
	I0211 02:19:03.908918   33782 status.go:371] ha-672486-m04 host status = "Running" (err=<nil>)
	I0211 02:19:03.908933   33782 host.go:66] Checking if "ha-672486-m04" exists ...
	I0211 02:19:03.909287   33782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:19:03.909328   33782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:19:03.923838   33782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39487
	I0211 02:19:03.924212   33782 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:19:03.924705   33782 main.go:141] libmachine: Using API Version  1
	I0211 02:19:03.924727   33782 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:19:03.925067   33782 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:19:03.925244   33782 main.go:141] libmachine: (ha-672486-m04) Calling .GetIP
	I0211 02:19:03.928233   33782 main.go:141] libmachine: (ha-672486-m04) DBG | domain ha-672486-m04 has defined MAC address 52:54:00:12:44:ce in network mk-ha-672486
	I0211 02:19:03.928647   33782 main.go:141] libmachine: (ha-672486-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:44:ce", ip: ""} in network mk-ha-672486: {Iface:virbr1 ExpiryTime:2025-02-11 03:16:38 +0000 UTC Type:0 Mac:52:54:00:12:44:ce Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-672486-m04 Clientid:01:52:54:00:12:44:ce}
	I0211 02:19:03.928671   33782 main.go:141] libmachine: (ha-672486-m04) DBG | domain ha-672486-m04 has defined IP address 192.168.39.44 and MAC address 52:54:00:12:44:ce in network mk-ha-672486
	I0211 02:19:03.928809   33782 host.go:66] Checking if "ha-672486-m04" exists ...
	I0211 02:19:03.929080   33782 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:19:03.929111   33782 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:19:03.943355   33782 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36393
	I0211 02:19:03.943795   33782 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:19:03.944265   33782 main.go:141] libmachine: Using API Version  1
	I0211 02:19:03.944334   33782 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:19:03.944609   33782 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:19:03.944788   33782 main.go:141] libmachine: (ha-672486-m04) Calling .DriverName
	I0211 02:19:03.944967   33782 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0211 02:19:03.944992   33782 main.go:141] libmachine: (ha-672486-m04) Calling .GetSSHHostname
	I0211 02:19:03.947624   33782 main.go:141] libmachine: (ha-672486-m04) DBG | domain ha-672486-m04 has defined MAC address 52:54:00:12:44:ce in network mk-ha-672486
	I0211 02:19:03.948090   33782 main.go:141] libmachine: (ha-672486-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:44:ce", ip: ""} in network mk-ha-672486: {Iface:virbr1 ExpiryTime:2025-02-11 03:16:38 +0000 UTC Type:0 Mac:52:54:00:12:44:ce Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:ha-672486-m04 Clientid:01:52:54:00:12:44:ce}
	I0211 02:19:03.948110   33782 main.go:141] libmachine: (ha-672486-m04) DBG | domain ha-672486-m04 has defined IP address 192.168.39.44 and MAC address 52:54:00:12:44:ce in network mk-ha-672486
	I0211 02:19:03.948268   33782 main.go:141] libmachine: (ha-672486-m04) Calling .GetSSHPort
	I0211 02:19:03.948414   33782 main.go:141] libmachine: (ha-672486-m04) Calling .GetSSHKeyPath
	I0211 02:19:03.948538   33782 main.go:141] libmachine: (ha-672486-m04) Calling .GetSSHUsername
	I0211 02:19:03.948659   33782 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/ha-672486-m04/id_rsa Username:docker}
	I0211 02:19:04.038739   33782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0211 02:19:04.053323   33782 status.go:176] ha-672486-m04 status: &{Name:ha-672486-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (91.58s)
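
The apiserver probe that status logs above ("Checking apiserver healthz at https://192.168.39.254:8443/healthz ... returned 200") can also be issued directly from the host; a minimal sketch, assuming the VIP 192.168.39.254 from this run is reachable and that the apiserver's default anonymous access to /healthz has not been disabled:

	# -k skips TLS verification; the endpoint and port are taken from the log above
	curl -k https://192.168.39.254:8443/healthz

A healthy endpoint answers with the body "ok", which is what the status command verifies before reporting apiserver: Running.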

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.64s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (49.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 node start m02 -v=7 --alsologtostderr
E0211 02:19:16.211108   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-672486 node start m02 -v=7 --alsologtostderr: (48.640101187s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (49.53s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.84s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (454.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-672486 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-672486 -v=7 --alsologtostderr
E0211 02:20:07.615868   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/functional-454298/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:22:23.754795   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/functional-454298/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:22:51.458019   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/functional-454298/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:24:16.211174   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-672486 -v=7 --alsologtostderr: (4m34.098842155s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-672486 --wait=true -v=7 --alsologtostderr
E0211 02:25:39.283127   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:27:23.755203   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/functional-454298/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-672486 --wait=true -v=7 --alsologtostderr: (3m0.624119869s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-672486
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (454.83s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (18.21s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-672486 node delete m03 -v=7 --alsologtostderr: (17.4542921s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.21s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.61s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (272.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 stop -v=7 --alsologtostderr
E0211 02:29:16.210497   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-672486 stop -v=7 --alsologtostderr: (4m32.550388054s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-672486 status -v=7 --alsologtostderr: exit status 7 (99.031807ms)

                                                
                                                
-- stdout --
	ha-672486
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-672486-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-672486-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0211 02:32:21.310434   38100 out.go:345] Setting OutFile to fd 1 ...
	I0211 02:32:21.310559   38100 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 02:32:21.310569   38100 out.go:358] Setting ErrFile to fd 2...
	I0211 02:32:21.310575   38100 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 02:32:21.310777   38100 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20400-12456/.minikube/bin
	I0211 02:32:21.310970   38100 out.go:352] Setting JSON to false
	I0211 02:32:21.311005   38100 mustload.go:65] Loading cluster: ha-672486
	I0211 02:32:21.311037   38100 notify.go:220] Checking for updates...
	I0211 02:32:21.311441   38100 config.go:182] Loaded profile config "ha-672486": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0211 02:32:21.311468   38100 status.go:174] checking status of ha-672486 ...
	I0211 02:32:21.311848   38100 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:32:21.311895   38100 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:32:21.330202   38100 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42729
	I0211 02:32:21.330624   38100 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:32:21.331278   38100 main.go:141] libmachine: Using API Version  1
	I0211 02:32:21.331303   38100 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:32:21.331710   38100 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:32:21.331916   38100 main.go:141] libmachine: (ha-672486) Calling .GetState
	I0211 02:32:21.333373   38100 status.go:371] ha-672486 host status = "Stopped" (err=<nil>)
	I0211 02:32:21.333391   38100 status.go:384] host is not running, skipping remaining checks
	I0211 02:32:21.333397   38100 status.go:176] ha-672486 status: &{Name:ha-672486 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0211 02:32:21.333417   38100 status.go:174] checking status of ha-672486-m02 ...
	I0211 02:32:21.333817   38100 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:32:21.333867   38100 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:32:21.347786   38100 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38079
	I0211 02:32:21.348104   38100 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:32:21.348539   38100 main.go:141] libmachine: Using API Version  1
	I0211 02:32:21.348562   38100 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:32:21.348811   38100 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:32:21.348968   38100 main.go:141] libmachine: (ha-672486-m02) Calling .GetState
	I0211 02:32:21.350504   38100 status.go:371] ha-672486-m02 host status = "Stopped" (err=<nil>)
	I0211 02:32:21.350518   38100 status.go:384] host is not running, skipping remaining checks
	I0211 02:32:21.350525   38100 status.go:176] ha-672486-m02 status: &{Name:ha-672486-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0211 02:32:21.350553   38100 status.go:174] checking status of ha-672486-m04 ...
	I0211 02:32:21.350938   38100 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:32:21.350981   38100 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:32:21.364645   38100 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34205
	I0211 02:32:21.365005   38100 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:32:21.365421   38100 main.go:141] libmachine: Using API Version  1
	I0211 02:32:21.365443   38100 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:32:21.365697   38100 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:32:21.365861   38100 main.go:141] libmachine: (ha-672486-m04) Calling .GetState
	I0211 02:32:21.367240   38100 status.go:371] ha-672486-m04 host status = "Stopped" (err=<nil>)
	I0211 02:32:21.367250   38100 status.go:384] host is not running, skipping remaining checks
	I0211 02:32:21.367255   38100 status.go:176] ha-672486-m04 status: &{Name:ha-672486-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (272.65s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (119.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-672486 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0211 02:32:23.754976   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/functional-454298/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:33:46.820388   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/functional-454298/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:34:16.210464   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-672486 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m58.844950368s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (119.55s)
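
The readiness assertion at the end of the restart reduces to a single template query over node conditions; a minimal sketch, run directly with kubectl (the ha-672486 context name is taken from this run and is an assumption about your kubeconfig):

	# prints the Ready condition status (True/False), one line per node
	kubectl --context ha-672486 get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'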

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.60s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (75.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-672486 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-672486 --control-plane -v=7 --alsologtostderr: (1m14.727291733s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-672486 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (75.56s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.83s)

                                                
                                    
TestJSONOutput/start/Command (48.54s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-764118 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-764118 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (48.538926127s)
--- PASS: TestJSONOutput/start/Command (48.54s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.69s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-764118 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.69s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.61s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-764118 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.36s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-764118 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-764118 --output=json --user=testUser: (7.355896619s)
--- PASS: TestJSONOutput/stop/Command (7.36s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.2s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-057640 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-057640 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (63.407408ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"996841e3-537f-43aa-9c9c-c7f6475be34c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-057640] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d7780b95-fc74-498a-bd6e-f4a99b4a1fe5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20400"}}
	{"specversion":"1.0","id":"231ec096-5f18-4e4a-8435-1f2a595805a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8f574c5d-3c02-496c-a32d-618433aa75e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20400-12456/kubeconfig"}}
	{"specversion":"1.0","id":"e50905d4-54b5-4e02-825a-99e900a6ee8d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20400-12456/.minikube"}}
	{"specversion":"1.0","id":"249404db-a537-401f-b047-6c15002b4c37","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"53f51247-71a0-4f46-b93c-772696c747d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"14d6b0a8-e7ae-4476-b754-57e4f601e9b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-057640" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-057640
--- PASS: TestErrorJSONOutput (0.20s)
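
The JSON lines above are minikube's CloudEvents-style progress stream: each line is one object with specversion, id, source, type, datacontenttype and a data payload. A minimal sketch of consuming that stream, assuming only the fields visible in the log (the event struct below is illustrative, not a type taken from the minikube source tree):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors only the fields visible in the log lines above; it is an
// illustrative type, not one taken from the minikube source tree.
type event struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	// Read "minikube start --output=json" lines from stdin and summarize them.
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // skip anything that is not a JSON event line
		}
		fmt.Printf("%s: %s\n", e.Type, e.Data["message"])
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}

Fed the stdout block above, this would print, for example, "io.k8s.sigs.minikube.error: The driver 'fail' is not supported on linux/amd64".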

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (87.65s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-935727 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-935727 --driver=kvm2  --container-runtime=crio: (43.021195688s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-961707 --driver=kvm2  --container-runtime=crio
E0211 02:37:23.755400   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/functional-454298/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-961707 --driver=kvm2  --container-runtime=crio: (42.034091452s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-935727
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-961707
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-961707" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-961707
helpers_test.go:175: Cleaning up "first-935727" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-935727
--- PASS: TestMinikubeProfile (87.65s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (27.07s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-601691 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-601691 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.073376299s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.07s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-601691 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-601691 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.36s)
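
The verification above amounts to running mount inside the guest over SSH and looking for a 9p filesystem entry. A minimal sketch of that check with os/exec, reusing the binary path and profile name from the log (the has9pMount helper is illustrative, not code from the test suite):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// has9pMount runs "minikube -p <profile> ssh -- mount" and reports whether any
// mounted filesystem uses the 9p protocol, mirroring the "mount | grep 9p"
// check above. Illustrative sketch, not code from the test suite.
func has9pMount(minikubeBin, profile string) (bool, error) {
	out, err := exec.Command(minikubeBin, "-p", profile, "ssh", "--", "mount").CombinedOutput()
	if err != nil {
		return false, fmt.Errorf("ssh mount: %v: %s", err, out)
	}
	return strings.Contains(string(out), "9p"), nil
}

func main() {
	ok, err := has9pMount("out/minikube-linux-amd64", "mount-start-1-601691")
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("9p mount present:", ok)
}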

                                                
                                    
TestMountStart/serial/StartWithMountSecond (30.86s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-620296 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-620296 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (29.855493557s)
--- PASS: TestMountStart/serial/StartWithMountSecond (30.86s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-620296 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-620296 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.67s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-601691 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.67s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-620296 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-620296 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                    
TestMountStart/serial/Stop (1.55s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-620296
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-620296: (1.548691557s)
--- PASS: TestMountStart/serial/Stop (1.55s)

                                                
                                    
TestMountStart/serial/RestartStopped (24.1s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-620296
E0211 02:39:16.211194   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-620296: (23.104248516s)
--- PASS: TestMountStart/serial/RestartStopped (24.10s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-620296 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-620296 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (116.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-065377 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-065377 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m55.750164685s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-065377 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (116.15s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-065377 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-065377 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-065377 -- rollout status deployment/busybox: (3.635695006s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-065377 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-065377 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-065377 -- exec busybox-58667487b6-4m9n6 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-065377 -- exec busybox-58667487b6-jbcvg -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-065377 -- exec busybox-58667487b6-4m9n6 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-065377 -- exec busybox-58667487b6-jbcvg -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-065377 -- exec busybox-58667487b6-4m9n6 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-065377 -- exec busybox-58667487b6-jbcvg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.15s)
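
The DNS checks above first list the busybox pod names with a jsonpath query and then run nslookup inside each pod. A condensed sketch of that loop, built only from the commands shown in the log (binary path and profile name as above; the rollout wait and error handling of the real test are omitted):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Binary path and profile name as used in the test log above.
	bin, profile := "out/minikube-linux-amd64", "multinode-065377"

	// List the deployed pod names, as the jsonpath query above does.
	out, err := exec.Command(bin, "kubectl", "-p", profile, "--",
		"get", "pods", "-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		fmt.Println("get pods:", err)
		return
	}

	// Run nslookup for each target inside every pod, mirroring the exec calls above.
	targets := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range strings.Fields(string(out)) {
		for _, host := range targets {
			cmd := exec.Command(bin, "kubectl", "-p", profile, "--", "exec", pod, "--", "nslookup", host)
			if err := cmd.Run(); err != nil {
				fmt.Printf("%s: lookup of %s failed: %v\n", pod, host, err)
			}
		}
	}
}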

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-065377 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-065377 -- exec busybox-58667487b6-4m9n6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-065377 -- exec busybox-58667487b6-4m9n6 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-065377 -- exec busybox-58667487b6-jbcvg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-065377 -- exec busybox-58667487b6-jbcvg -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.73s)

                                                
                                    
TestMultiNode/serial/AddNode (50.89s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-065377 -v 3 --alsologtostderr
E0211 02:42:19.285403   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:42:23.755453   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/functional-454298/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-065377 -v 3 --alsologtostderr: (50.333011996s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-065377 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (50.89s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-065377 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.55s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.55s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-065377 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-065377 cp testdata/cp-test.txt multinode-065377:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-065377 ssh -n multinode-065377 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-065377 cp multinode-065377:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile322392259/001/cp-test_multinode-065377.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-065377 ssh -n multinode-065377 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-065377 cp multinode-065377:/home/docker/cp-test.txt multinode-065377-m02:/home/docker/cp-test_multinode-065377_multinode-065377-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-065377 ssh -n multinode-065377 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-065377 ssh -n multinode-065377-m02 "sudo cat /home/docker/cp-test_multinode-065377_multinode-065377-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-065377 cp multinode-065377:/home/docker/cp-test.txt multinode-065377-m03:/home/docker/cp-test_multinode-065377_multinode-065377-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-065377 ssh -n multinode-065377 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-065377 ssh -n multinode-065377-m03 "sudo cat /home/docker/cp-test_multinode-065377_multinode-065377-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-065377 cp testdata/cp-test.txt multinode-065377-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-065377 ssh -n multinode-065377-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-065377 cp multinode-065377-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile322392259/001/cp-test_multinode-065377-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-065377 ssh -n multinode-065377-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-065377 cp multinode-065377-m02:/home/docker/cp-test.txt multinode-065377:/home/docker/cp-test_multinode-065377-m02_multinode-065377.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-065377 ssh -n multinode-065377-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-065377 ssh -n multinode-065377 "sudo cat /home/docker/cp-test_multinode-065377-m02_multinode-065377.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-065377 cp multinode-065377-m02:/home/docker/cp-test.txt multinode-065377-m03:/home/docker/cp-test_multinode-065377-m02_multinode-065377-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-065377 ssh -n multinode-065377-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-065377 ssh -n multinode-065377-m03 "sudo cat /home/docker/cp-test_multinode-065377-m02_multinode-065377-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-065377 cp testdata/cp-test.txt multinode-065377-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-065377 ssh -n multinode-065377-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-065377 cp multinode-065377-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile322392259/001/cp-test_multinode-065377-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-065377 ssh -n multinode-065377-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-065377 cp multinode-065377-m03:/home/docker/cp-test.txt multinode-065377:/home/docker/cp-test_multinode-065377-m03_multinode-065377.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-065377 ssh -n multinode-065377-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-065377 ssh -n multinode-065377 "sudo cat /home/docker/cp-test_multinode-065377-m03_multinode-065377.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-065377 cp multinode-065377-m03:/home/docker/cp-test.txt multinode-065377-m02:/home/docker/cp-test_multinode-065377-m03_multinode-065377-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-065377 ssh -n multinode-065377-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-065377 ssh -n multinode-065377-m02 "sudo cat /home/docker/cp-test_multinode-065377-m03_multinode-065377-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.04s)
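
Every copy above is validated by reading the file back with sudo cat over SSH on the destination node. A minimal sketch of one copy-and-verify round trip, using the same minikube cp and ssh -n invocations as the log (the copyAndVerify helper is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// copyAndVerify copies src into node:dst with "minikube cp" and reads it back
// with "minikube ssh -n <node> sudo cat", as each step of the CopyFile test
// above does. Illustrative helper, not code from the test suite.
func copyAndVerify(bin, profile, node, src, dst string) error {
	if out, err := exec.Command(bin, "-p", profile, "cp", src, node+":"+dst).CombinedOutput(); err != nil {
		return fmt.Errorf("cp: %v: %s", err, out)
	}
	out, err := exec.Command(bin, "-p", profile, "ssh", "-n", node, "sudo cat "+dst).CombinedOutput()
	if err != nil {
		return fmt.Errorf("cat: %v: %s", err, out)
	}
	fmt.Printf("%s on %s holds %d bytes\n", dst, node, len(out))
	return nil
}

func main() {
	err := copyAndVerify("out/minikube-linux-amd64", "multinode-065377",
		"multinode-065377-m02", "testdata/cp-test.txt", "/home/docker/cp-test.txt")
	if err != nil {
		fmt.Println(err)
	}
}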

                                                
                                    
TestMultiNode/serial/StopNode (2.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-065377 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-065377 node stop m03: (1.412215521s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-065377 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-065377 status: exit status 7 (405.305197ms)

                                                
                                                
-- stdout --
	multinode-065377
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-065377-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-065377-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-065377 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-065377 status --alsologtostderr: exit status 7 (410.90375ms)

                                                
                                                
-- stdout --
	multinode-065377
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-065377-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-065377-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0211 02:42:35.788136   45809 out.go:345] Setting OutFile to fd 1 ...
	I0211 02:42:35.788233   45809 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 02:42:35.788241   45809 out.go:358] Setting ErrFile to fd 2...
	I0211 02:42:35.788245   45809 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 02:42:35.788420   45809 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20400-12456/.minikube/bin
	I0211 02:42:35.788580   45809 out.go:352] Setting JSON to false
	I0211 02:42:35.788607   45809 mustload.go:65] Loading cluster: multinode-065377
	I0211 02:42:35.788718   45809 notify.go:220] Checking for updates...
	I0211 02:42:35.788994   45809 config.go:182] Loaded profile config "multinode-065377": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0211 02:42:35.789012   45809 status.go:174] checking status of multinode-065377 ...
	I0211 02:42:35.789385   45809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:42:35.789417   45809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:42:35.805795   45809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46355
	I0211 02:42:35.806336   45809 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:42:35.806953   45809 main.go:141] libmachine: Using API Version  1
	I0211 02:42:35.806985   45809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:42:35.807475   45809 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:42:35.807718   45809 main.go:141] libmachine: (multinode-065377) Calling .GetState
	I0211 02:42:35.809469   45809 status.go:371] multinode-065377 host status = "Running" (err=<nil>)
	I0211 02:42:35.809488   45809 host.go:66] Checking if "multinode-065377" exists ...
	I0211 02:42:35.809871   45809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:42:35.809920   45809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:42:35.825223   45809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37171
	I0211 02:42:35.825643   45809 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:42:35.826077   45809 main.go:141] libmachine: Using API Version  1
	I0211 02:42:35.826098   45809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:42:35.826446   45809 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:42:35.826653   45809 main.go:141] libmachine: (multinode-065377) Calling .GetIP
	I0211 02:42:35.829249   45809 main.go:141] libmachine: (multinode-065377) DBG | domain multinode-065377 has defined MAC address 52:54:00:e8:e7:aa in network mk-multinode-065377
	I0211 02:42:35.829668   45809 main.go:141] libmachine: (multinode-065377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:e7:aa", ip: ""} in network mk-multinode-065377: {Iface:virbr1 ExpiryTime:2025-02-11 03:39:47 +0000 UTC Type:0 Mac:52:54:00:e8:e7:aa Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-065377 Clientid:01:52:54:00:e8:e7:aa}
	I0211 02:42:35.829698   45809 main.go:141] libmachine: (multinode-065377) DBG | domain multinode-065377 has defined IP address 192.168.39.159 and MAC address 52:54:00:e8:e7:aa in network mk-multinode-065377
	I0211 02:42:35.829795   45809 host.go:66] Checking if "multinode-065377" exists ...
	I0211 02:42:35.830078   45809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:42:35.830127   45809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:42:35.845141   45809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36043
	I0211 02:42:35.845481   45809 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:42:35.845993   45809 main.go:141] libmachine: Using API Version  1
	I0211 02:42:35.846019   45809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:42:35.846285   45809 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:42:35.846544   45809 main.go:141] libmachine: (multinode-065377) Calling .DriverName
	I0211 02:42:35.846741   45809 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0211 02:42:35.846765   45809 main.go:141] libmachine: (multinode-065377) Calling .GetSSHHostname
	I0211 02:42:35.849315   45809 main.go:141] libmachine: (multinode-065377) DBG | domain multinode-065377 has defined MAC address 52:54:00:e8:e7:aa in network mk-multinode-065377
	I0211 02:42:35.849709   45809 main.go:141] libmachine: (multinode-065377) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:e7:aa", ip: ""} in network mk-multinode-065377: {Iface:virbr1 ExpiryTime:2025-02-11 03:39:47 +0000 UTC Type:0 Mac:52:54:00:e8:e7:aa Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-065377 Clientid:01:52:54:00:e8:e7:aa}
	I0211 02:42:35.849734   45809 main.go:141] libmachine: (multinode-065377) DBG | domain multinode-065377 has defined IP address 192.168.39.159 and MAC address 52:54:00:e8:e7:aa in network mk-multinode-065377
	I0211 02:42:35.849845   45809 main.go:141] libmachine: (multinode-065377) Calling .GetSSHPort
	I0211 02:42:35.850019   45809 main.go:141] libmachine: (multinode-065377) Calling .GetSSHKeyPath
	I0211 02:42:35.850169   45809 main.go:141] libmachine: (multinode-065377) Calling .GetSSHUsername
	I0211 02:42:35.850294   45809 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/multinode-065377/id_rsa Username:docker}
	I0211 02:42:35.929431   45809 ssh_runner.go:195] Run: systemctl --version
	I0211 02:42:35.935287   45809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0211 02:42:35.949451   45809 kubeconfig.go:125] found "multinode-065377" server: "https://192.168.39.159:8443"
	I0211 02:42:35.949493   45809 api_server.go:166] Checking apiserver status ...
	I0211 02:42:35.949533   45809 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0211 02:42:35.962925   45809 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1106/cgroup
	W0211 02:42:35.972195   45809 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1106/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0211 02:42:35.972251   45809 ssh_runner.go:195] Run: ls
	I0211 02:42:35.976216   45809 api_server.go:253] Checking apiserver healthz at https://192.168.39.159:8443/healthz ...
	I0211 02:42:35.980579   45809 api_server.go:279] https://192.168.39.159:8443/healthz returned 200:
	ok
	I0211 02:42:35.980604   45809 status.go:463] multinode-065377 apiserver status = Running (err=<nil>)
	I0211 02:42:35.980617   45809 status.go:176] multinode-065377 status: &{Name:multinode-065377 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0211 02:42:35.980647   45809 status.go:174] checking status of multinode-065377-m02 ...
	I0211 02:42:35.980945   45809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:42:35.980992   45809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:42:35.997043   45809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36475
	I0211 02:42:35.997490   45809 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:42:35.998060   45809 main.go:141] libmachine: Using API Version  1
	I0211 02:42:35.998081   45809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:42:35.998410   45809 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:42:35.998571   45809 main.go:141] libmachine: (multinode-065377-m02) Calling .GetState
	I0211 02:42:36.000102   45809 status.go:371] multinode-065377-m02 host status = "Running" (err=<nil>)
	I0211 02:42:36.000116   45809 host.go:66] Checking if "multinode-065377-m02" exists ...
	I0211 02:42:36.000412   45809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:42:36.000478   45809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:42:36.014773   45809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46257
	I0211 02:42:36.015175   45809 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:42:36.015584   45809 main.go:141] libmachine: Using API Version  1
	I0211 02:42:36.015605   45809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:42:36.015889   45809 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:42:36.016056   45809 main.go:141] libmachine: (multinode-065377-m02) Calling .GetIP
	I0211 02:42:36.018782   45809 main.go:141] libmachine: (multinode-065377-m02) DBG | domain multinode-065377-m02 has defined MAC address 52:54:00:d6:5a:38 in network mk-multinode-065377
	I0211 02:42:36.019214   45809 main.go:141] libmachine: (multinode-065377-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:5a:38", ip: ""} in network mk-multinode-065377: {Iface:virbr1 ExpiryTime:2025-02-11 03:40:51 +0000 UTC Type:0 Mac:52:54:00:d6:5a:38 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:multinode-065377-m02 Clientid:01:52:54:00:d6:5a:38}
	I0211 02:42:36.019248   45809 main.go:141] libmachine: (multinode-065377-m02) DBG | domain multinode-065377-m02 has defined IP address 192.168.39.120 and MAC address 52:54:00:d6:5a:38 in network mk-multinode-065377
	I0211 02:42:36.019353   45809 host.go:66] Checking if "multinode-065377-m02" exists ...
	I0211 02:42:36.019687   45809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:42:36.019737   45809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:42:36.034089   45809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43183
	I0211 02:42:36.034492   45809 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:42:36.034932   45809 main.go:141] libmachine: Using API Version  1
	I0211 02:42:36.034952   45809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:42:36.035231   45809 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:42:36.035396   45809 main.go:141] libmachine: (multinode-065377-m02) Calling .DriverName
	I0211 02:42:36.035543   45809 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0211 02:42:36.035564   45809 main.go:141] libmachine: (multinode-065377-m02) Calling .GetSSHHostname
	I0211 02:42:36.038044   45809 main.go:141] libmachine: (multinode-065377-m02) DBG | domain multinode-065377-m02 has defined MAC address 52:54:00:d6:5a:38 in network mk-multinode-065377
	I0211 02:42:36.038436   45809 main.go:141] libmachine: (multinode-065377-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:5a:38", ip: ""} in network mk-multinode-065377: {Iface:virbr1 ExpiryTime:2025-02-11 03:40:51 +0000 UTC Type:0 Mac:52:54:00:d6:5a:38 Iaid: IPaddr:192.168.39.120 Prefix:24 Hostname:multinode-065377-m02 Clientid:01:52:54:00:d6:5a:38}
	I0211 02:42:36.038449   45809 main.go:141] libmachine: (multinode-065377-m02) DBG | domain multinode-065377-m02 has defined IP address 192.168.39.120 and MAC address 52:54:00:d6:5a:38 in network mk-multinode-065377
	I0211 02:42:36.038589   45809 main.go:141] libmachine: (multinode-065377-m02) Calling .GetSSHPort
	I0211 02:42:36.038736   45809 main.go:141] libmachine: (multinode-065377-m02) Calling .GetSSHKeyPath
	I0211 02:42:36.038901   45809 main.go:141] libmachine: (multinode-065377-m02) Calling .GetSSHUsername
	I0211 02:42:36.039016   45809 sshutil.go:53] new ssh client: &{IP:192.168.39.120 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20400-12456/.minikube/machines/multinode-065377-m02/id_rsa Username:docker}
	I0211 02:42:36.121517   45809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0211 02:42:36.134277   45809 status.go:176] multinode-065377-m02 status: &{Name:multinode-065377-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0211 02:42:36.134312   45809 status.go:174] checking status of multinode-065377-m03 ...
	I0211 02:42:36.134648   45809 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:42:36.134687   45809 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:42:36.149860   45809 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40077
	I0211 02:42:36.150318   45809 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:42:36.150784   45809 main.go:141] libmachine: Using API Version  1
	I0211 02:42:36.150808   45809 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:42:36.151148   45809 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:42:36.151331   45809 main.go:141] libmachine: (multinode-065377-m03) Calling .GetState
	I0211 02:42:36.152786   45809 status.go:371] multinode-065377-m03 host status = "Stopped" (err=<nil>)
	I0211 02:42:36.152800   45809 status.go:384] host is not running, skipping remaining checks
	I0211 02:42:36.152805   45809 status.go:176] multinode-065377-m03 status: &{Name:multinode-065377-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.23s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (38.86s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-065377 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-065377 node start m03 -v=7 --alsologtostderr: (38.244321527s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-065377 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (38.86s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (340.48s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-065377
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-065377
E0211 02:44:16.211946   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-065377: (3m3.276749912s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-065377 --wait=true -v=8 --alsologtostderr
E0211 02:47:23.754633   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/functional-454298/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-065377 --wait=true -v=8 --alsologtostderr: (2m37.110086775s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-065377
--- PASS: TestMultiNode/serial/RestartKeepsNodes (340.48s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.6s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-065377 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-065377 node delete m03: (2.073199132s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-065377 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.60s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (181.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-065377 stop
E0211 02:49:16.212134   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/client.crt: no such file or directory" logger="UnhandledError"
E0211 02:50:26.821997   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/functional-454298/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-065377 stop: (3m1.671871246s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-065377 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-065377 status: exit status 7 (81.280709ms)

                                                
                                                
-- stdout --
	multinode-065377
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-065377-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-065377 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-065377 status --alsologtostderr: exit status 7 (79.571414ms)

                                                
                                                
-- stdout --
	multinode-065377
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-065377-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0211 02:51:59.888869   48844 out.go:345] Setting OutFile to fd 1 ...
	I0211 02:51:59.889115   48844 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 02:51:59.889124   48844 out.go:358] Setting ErrFile to fd 2...
	I0211 02:51:59.889128   48844 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 02:51:59.889284   48844 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20400-12456/.minikube/bin
	I0211 02:51:59.889435   48844 out.go:352] Setting JSON to false
	I0211 02:51:59.889464   48844 mustload.go:65] Loading cluster: multinode-065377
	I0211 02:51:59.889490   48844 notify.go:220] Checking for updates...
	I0211 02:51:59.889981   48844 config.go:182] Loaded profile config "multinode-065377": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0211 02:51:59.890002   48844 status.go:174] checking status of multinode-065377 ...
	I0211 02:51:59.890497   48844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:51:59.890552   48844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:51:59.904989   48844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38717
	I0211 02:51:59.905459   48844 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:51:59.906082   48844 main.go:141] libmachine: Using API Version  1
	I0211 02:51:59.906115   48844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:51:59.906485   48844 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:51:59.906658   48844 main.go:141] libmachine: (multinode-065377) Calling .GetState
	I0211 02:51:59.908158   48844 status.go:371] multinode-065377 host status = "Stopped" (err=<nil>)
	I0211 02:51:59.908173   48844 status.go:384] host is not running, skipping remaining checks
	I0211 02:51:59.908180   48844 status.go:176] multinode-065377 status: &{Name:multinode-065377 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0211 02:51:59.908221   48844 status.go:174] checking status of multinode-065377-m02 ...
	I0211 02:51:59.908503   48844 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0211 02:51:59.908535   48844 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0211 02:51:59.922418   48844 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35287
	I0211 02:51:59.922751   48844 main.go:141] libmachine: () Calling .GetVersion
	I0211 02:51:59.923178   48844 main.go:141] libmachine: Using API Version  1
	I0211 02:51:59.923196   48844 main.go:141] libmachine: () Calling .SetConfigRaw
	I0211 02:51:59.923472   48844 main.go:141] libmachine: () Calling .GetMachineName
	I0211 02:51:59.923637   48844 main.go:141] libmachine: (multinode-065377-m02) Calling .GetState
	I0211 02:51:59.925042   48844 status.go:371] multinode-065377-m02 host status = "Stopped" (err=<nil>)
	I0211 02:51:59.925057   48844 status.go:384] host is not running, skipping remaining checks
	I0211 02:51:59.925064   48844 status.go:176] multinode-065377-m02 status: &{Name:multinode-065377-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (181.83s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (115.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-065377 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0211 02:52:23.755050   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/functional-454298/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-065377 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m54.613843334s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-065377 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (115.11s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (53s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-065377
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-065377-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-065377-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (61.456482ms)

                                                
                                                
-- stdout --
	* [multinode-065377-m02] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20400
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20400-12456/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20400-12456/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-065377-m02' is duplicated with machine name 'multinode-065377-m02' in profile 'multinode-065377'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-065377-m03 --driver=kvm2  --container-runtime=crio
E0211 02:54:16.211707   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-065377-m03 --driver=kvm2  --container-runtime=crio: (41.926659131s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-065377
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-065377: exit status 80 (198.915593ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-065377 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-065377-m03 already exists in multinode-065377-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-065377-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-065377-m03: (10.767693366s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (53.00s)

                                                
                                    
TestScheduledStopUnix (114.52s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-432472 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-432472 --memory=2048 --driver=kvm2  --container-runtime=crio: (42.954157041s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-432472 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-432472 -n scheduled-stop-432472
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-432472 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0211 03:00:23.006600   19645 retry.go:31] will retry after 75.29µs: open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/scheduled-stop-432472/pid: no such file or directory
I0211 03:00:23.007766   19645 retry.go:31] will retry after 161.958µs: open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/scheduled-stop-432472/pid: no such file or directory
I0211 03:00:23.008917   19645 retry.go:31] will retry after 273.523µs: open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/scheduled-stop-432472/pid: no such file or directory
I0211 03:00:23.010061   19645 retry.go:31] will retry after 198.161µs: open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/scheduled-stop-432472/pid: no such file or directory
I0211 03:00:23.011179   19645 retry.go:31] will retry after 680.528µs: open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/scheduled-stop-432472/pid: no such file or directory
I0211 03:00:23.012288   19645 retry.go:31] will retry after 451.083µs: open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/scheduled-stop-432472/pid: no such file or directory
I0211 03:00:23.013402   19645 retry.go:31] will retry after 1.139752ms: open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/scheduled-stop-432472/pid: no such file or directory
I0211 03:00:23.015579   19645 retry.go:31] will retry after 1.210372ms: open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/scheduled-stop-432472/pid: no such file or directory
I0211 03:00:23.017773   19645 retry.go:31] will retry after 3.555097ms: open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/scheduled-stop-432472/pid: no such file or directory
I0211 03:00:23.021986   19645 retry.go:31] will retry after 5.192817ms: open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/scheduled-stop-432472/pid: no such file or directory
I0211 03:00:23.028195   19645 retry.go:31] will retry after 3.382688ms: open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/scheduled-stop-432472/pid: no such file or directory
I0211 03:00:23.032399   19645 retry.go:31] will retry after 6.28906ms: open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/scheduled-stop-432472/pid: no such file or directory
I0211 03:00:23.039655   19645 retry.go:31] will retry after 14.098613ms: open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/scheduled-stop-432472/pid: no such file or directory
I0211 03:00:23.053863   19645 retry.go:31] will retry after 13.572851ms: open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/scheduled-stop-432472/pid: no such file or directory
I0211 03:00:23.068122   19645 retry.go:31] will retry after 31.244687ms: open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/scheduled-stop-432472/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-432472 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-432472 -n scheduled-stop-432472
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-432472
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-432472 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-432472
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-432472: exit status 7 (62.676569ms)

                                                
                                                
-- stdout --
	scheduled-stop-432472
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-432472 -n scheduled-stop-432472
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-432472 -n scheduled-stop-432472: exit status 7 (61.720589ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-432472" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-432472
--- PASS: TestScheduledStopUnix (114.52s)
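The retry.go lines above show the test polling for the scheduled-stop pid file, roughly doubling the wait between attempts until the file exists. A minimal shell sketch of that poll-with-backoff pattern (the path is illustrative, not the test's actual helper):

	$ d=1
	$ until test -f /path/to/profiles/<profile>/pid; do sleep "$d"; d=$((d*2)); done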

                                                
                                    
TestRunningBinaryUpgrade (228.71s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2884084578 start -p running-upgrade-378121 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2884084578 start -p running-upgrade-378121 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m6.349882203s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-378121 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-378121 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m40.714406448s)
helpers_test.go:175: Cleaning up "running-upgrade-378121" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-378121
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-378121: (1.17979026s)
--- PASS: TestRunningBinaryUpgrade (228.71s)
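As the commands above show, the running-binary upgrade scenario starts a cluster with an older released minikube binary, re-runs start on the same profile with the freshly built binary, and then deletes it; in outline (paths and profile name taken from this run, logging flags omitted):

	$ /tmp/minikube-v1.26.0.2884084578 start -p running-upgrade-378121 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
	$ out/minikube-linux-amd64 start -p running-upgrade-378121 --memory=2200 --driver=kvm2 --container-runtime=crio
	$ out/minikube-linux-amd64 delete -p running-upgrade-378121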

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-369064 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-369064 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (79.025308ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-369064] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20400
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20400-12456/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20400-12456/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
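The MK_USAGE exit above is the behavior under test: --no-kubernetes and --kubernetes-version are mutually exclusive. A minimal sketch of the conflict and the workaround suggested by the error text (profile name illustrative):

	$ minikube start -p demo --no-kubernetes --kubernetes-version=1.20   # rejected with MK_USAGE
	$ minikube config unset kubernetes-version                           # clear any global default
	$ minikube start -p demo --no-kubernetes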

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (94.06s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-369064 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-369064 --driver=kvm2  --container-runtime=crio: (1m33.821601919s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-369064 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (94.06s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (70.01s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-369064 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-369064 --no-kubernetes --driver=kvm2  --container-runtime=crio: (1m8.053663538s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-369064 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-369064 status -o json: exit status 2 (247.032308ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-369064","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-369064
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-369064: (1.705398634s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (70.01s)

                                                
                                    
TestNoKubernetes/serial/Start (49.46s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-369064 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-369064 --no-kubernetes --driver=kvm2  --container-runtime=crio: (49.464411063s)
--- PASS: TestNoKubernetes/serial/Start (49.46s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-369064 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-369064 "sudo systemctl is-active --quiet service kubelet": exit status 1 (234.477954ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)
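The non-zero exit is the point of this check: systemctl is-active exits non-zero (typically 3) when the queried unit is not running, so status 3 from the ssh command confirms the kubelet service is inactive. For example (profile placeholder, run without --quiet to see the state):

	$ minikube ssh -p <profile> "sudo systemctl is-active kubelet"
	inactive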

                                                
                                    
TestNetworkPlugins/group/false (3.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-649359 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-649359 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (122.28458ms)

                                                
                                                
-- stdout --
	* [false-649359] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20400
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20400-12456/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20400-12456/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0211 03:05:08.045229   56587 out.go:345] Setting OutFile to fd 1 ...
	I0211 03:05:08.045385   56587 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 03:05:08.045398   56587 out.go:358] Setting ErrFile to fd 2...
	I0211 03:05:08.045405   56587 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0211 03:05:08.045719   56587 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20400-12456/.minikube/bin
	I0211 03:05:08.046531   56587 out.go:352] Setting JSON to false
	I0211 03:05:08.047664   56587 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6459,"bootTime":1739236649,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0211 03:05:08.047763   56587 start.go:139] virtualization: kvm guest
	I0211 03:05:08.049998   56587 out.go:177] * [false-649359] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0211 03:05:08.051766   56587 out.go:177]   - MINIKUBE_LOCATION=20400
	I0211 03:05:08.051798   56587 notify.go:220] Checking for updates...
	I0211 03:05:08.054558   56587 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0211 03:05:08.055773   56587 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20400-12456/kubeconfig
	I0211 03:05:08.056906   56587 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20400-12456/.minikube
	I0211 03:05:08.057989   56587 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0211 03:05:08.059046   56587 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0211 03:05:08.061252   56587 config.go:182] Loaded profile config "NoKubernetes-369064": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0211 03:05:08.061390   56587 config.go:182] Loaded profile config "cert-expiration-411526": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
	I0211 03:05:08.061571   56587 config.go:182] Loaded profile config "running-upgrade-378121": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0211 03:05:08.061708   56587 driver.go:394] Setting default libvirt URI to qemu:///system
	I0211 03:05:08.102391   56587 out.go:177] * Using the kvm2 driver based on user configuration
	I0211 03:05:08.103523   56587 start.go:297] selected driver: kvm2
	I0211 03:05:08.103539   56587 start.go:901] validating driver "kvm2" against <nil>
	I0211 03:05:08.103551   56587 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0211 03:05:08.105708   56587 out.go:201] 
	W0211 03:05:08.106822   56587 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0211 03:05:08.107916   56587 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-649359 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-649359

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-649359

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-649359

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-649359

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-649359

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-649359

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-649359

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-649359

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-649359

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-649359

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649359"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649359"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649359"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-649359

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649359"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649359"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-649359" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-649359" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-649359" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-649359" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-649359" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-649359" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-649359" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-649359" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649359"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649359"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649359"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649359"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649359"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-649359" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-649359" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-649359" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649359"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649359"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649359"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649359"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649359"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20400-12456/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 11 Feb 2025 03:04:23 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.72.237:8443
  name: cert-expiration-411526
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20400-12456/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 11 Feb 2025 03:04:31 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.61.91:8443
  name: running-upgrade-378121
contexts:
- context:
    cluster: cert-expiration-411526
    extensions:
    - extension:
        last-update: Tue, 11 Feb 2025 03:04:23 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: cert-expiration-411526
  name: cert-expiration-411526
- context:
    cluster: running-upgrade-378121
    user: running-upgrade-378121
  name: running-upgrade-378121
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-411526
  user:
    client-certificate: /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/cert-expiration-411526/client.crt
    client-key: /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/cert-expiration-411526/client.key
- name: running-upgrade-378121
  user:
    client-certificate: /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/running-upgrade-378121/client.crt
    client-key: /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/running-upgrade-378121/client.key
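Note that current-context is empty and no false-649359 entry exists, which is why every kubectl probe in this debugLogs block reports that the context was not found. Selecting one of the contexts that does exist in the dump would look like:

	$ kubectl config use-context cert-expiration-411526
	$ kubectl --context cert-expiration-411526 get pods -A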

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-649359

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649359"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649359"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649359"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649359"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649359"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649359"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649359"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649359"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649359"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649359"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649359"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649359"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649359"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649359"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649359"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649359"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649359"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-649359"

                                                
                                                
----------------------- debugLogs end: false-649359 [took: 2.977150318s] --------------------------------
helpers_test.go:175: Cleaning up "false-649359" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-649359
--- PASS: TestNetworkPlugins/group/false (3.25s)
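The MK_USAGE exit above is also expected: with --container-runtime=crio, minikube rejects --cni=false because CRI-O needs a CNI plugin for pod networking. A start invocation that satisfies the constraint picks (or defaults to) a concrete CNI, for example (profile name illustrative):

	$ minikube start -p demo --driver=kvm2 --container-runtime=crio --cni=bridge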

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.33s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-369064
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-369064: (1.304613037s)
--- PASS: TestNoKubernetes/serial/Stop (1.30s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (24.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-369064 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-369064 --driver=kvm2  --container-runtime=crio: (24.290940051s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (24.29s)

                                                
                                    
TestPause/serial/Start (92.95s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-224871 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-224871 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m32.949877288s)
--- PASS: TestPause/serial/Start (92.95s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-369064 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-369064 "sudo systemctl is-active --quiet service kubelet": exit status 1 (192.712024ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.43s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.43s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (141.01s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1892167637 start -p stopped-upgrade-285044 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1892167637 start -p stopped-upgrade-285044 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m37.74769688s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1892167637 -p stopped-upgrade-285044 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1892167637 -p stopped-upgrade-285044 stop: (2.145703432s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-285044 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0211 03:07:23.754657   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/functional-454298/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-285044 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (41.117042916s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (141.01s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (41.91s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-224871 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E0211 03:07:06.823836   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/functional-454298/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-224871 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (41.880316953s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (41.91s)

                                                
                                    
TestPause/serial/Pause (0.73s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-224871 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.73s)

                                                
                                    
TestPause/serial/VerifyStatus (0.25s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-224871 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-224871 --output=json --layout=cluster: exit status 2 (253.532378ms)

                                                
                                                
-- stdout --
	{"Name":"pause-224871","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-224871","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.25s)
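The non-zero exit here goes with the paused state rather than a test failure (the test passes); the StatusCode/StatusName pairs in the JSON (418 Paused, 405 Stopped, 200 OK) describe each component. One way to pull just the per-component states out of that output, assuming jq is available:

	$ minikube status -p pause-224871 --output=json --layout=cluster | jq '.Nodes[].Components'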

                                                
                                    
TestPause/serial/Unpause (0.7s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-224871 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.70s)

                                                
                                    
TestPause/serial/PauseAgain (2.07s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-224871 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-224871 --alsologtostderr -v=5: (2.067433387s)
--- PASS: TestPause/serial/PauseAgain (2.07s)

                                                
                                    
TestPause/serial/DeletePaused (1.37s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-224871 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-224871 --alsologtostderr -v=5: (1.367140366s)
--- PASS: TestPause/serial/DeletePaused (1.37s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.66s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.66s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.78s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-285044
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.78s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (80.74s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-214316 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
E0211 03:09:16.210242   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-214316 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (1m20.741529583s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (80.74s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-214316 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5207db21-4a58-45e3-8a3f-2f9c20e9717c] Pending
helpers_test.go:344: "busybox" [5207db21-4a58-45e3-8a3f-2f9c20e9717c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5207db21-4a58-45e3-8a3f-2f9c20e9717c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.006886651s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-214316 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.28s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.95s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-214316 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-214316 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.95s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (90.98s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-214316 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-214316 --alsologtostderr -v=3: (1m30.979475838s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (90.98s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-214316 -n no-preload-214316
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-214316 -n no-preload-214316: exit status 7 (62.463195ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-214316 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (310.33s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-214316 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-214316 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (5m10.077360071s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-214316 -n no-preload-214316
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (310.33s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (56.94s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-443106 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-443106 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (56.936371564s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (56.94s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.31s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-443106 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5e6d0d45-8aa2-4769-aace-3570db409e0c] Pending
helpers_test.go:344: "busybox" [5e6d0d45-8aa2-4769-aace-3570db409e0c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5e6d0d45-8aa2-4769-aace-3570db409e0c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003835273s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-443106 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.31s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.96s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-443106 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-443106 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.96s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (91.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-443106 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-443106 --alsologtostderr -v=3: (1m31.207123747s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (91.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (56.69s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-697681 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-697681 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (56.686120761s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (56.69s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (2.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-244815 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-244815 --alsologtostderr -v=3: (2.285031084s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.29s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-244815 -n old-k8s-version-244815
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-244815 -n old-k8s-version-244815: exit status 7 (62.617596ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-244815 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-443106 -n embed-certs-443106
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-443106 -n embed-certs-443106: exit status 7 (69.807202ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-443106 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (336.39s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-443106 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-443106 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (5m35.891379213s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-443106 -n embed-certs-443106
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (336.39s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-697681 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f08fc18d-8240-4404-814a-6655bbaceacd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f08fc18d-8240-4404-814a-6655bbaceacd] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.002948964s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-697681 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.95s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-697681 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-697681 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.95s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (91.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-697681 --alsologtostderr -v=3
E0211 03:15:39.288078   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-697681 --alsologtostderr -v=3: (1m31.014624527s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-99k6l" [7a0afef9-2a1d-4e33-aa44-984af620a211] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004815356s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-697681 -n default-k8s-diff-port-697681
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-697681 -n default-k8s-diff-port-697681: exit status 7 (66.705712ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-697681 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (368.51s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-697681 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-697681 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (6m8.188771098s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-697681 -n default-k8s-diff-port-697681
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (368.51s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-99k6l" [7a0afef9-2a1d-4e33-aa44-984af620a211] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005581665s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-214316 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-214316 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.77s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-214316 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-214316 -n no-preload-214316
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-214316 -n no-preload-214316: exit status 2 (267.240882ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-214316 -n no-preload-214316
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-214316 -n no-preload-214316: exit status 2 (250.670197ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-214316 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-214316 -n no-preload-214316
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-214316 -n no-preload-214316
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.77s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (52.69s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-889715 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-889715 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (52.690511742s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (52.69s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.05s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-889715 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-889715 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.048756407s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.05s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (10.41s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-889715 --alsologtostderr -v=3
E0211 03:17:23.755115   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/functional-454298/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-889715 --alsologtostderr -v=3: (10.412692635s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.41s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-889715 -n newest-cni-889715
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-889715 -n newest-cni-889715: exit status 7 (68.974287ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-889715 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (36.65s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-889715 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-889715 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.1: (36.383230424s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-889715 -n newest-cni-889715
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (36.65s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-889715 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.29s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-889715 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-889715 -n newest-cni-889715
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-889715 -n newest-cni-889715: exit status 2 (234.654429ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-889715 -n newest-cni-889715
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-889715 -n newest-cni-889715: exit status 2 (226.484542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-889715 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-889715 -n newest-cni-889715
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-889715 -n newest-cni-889715
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.29s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (50.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-649359 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-649359 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (50.371722742s)
--- PASS: TestNetworkPlugins/group/auto/Start (50.37s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-649359 "pgrep -a kubelet"
I0211 03:19:02.364596   19645 config.go:182] Loaded profile config "auto-649359": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-649359 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-b7r6g" [4aa4f881-a38c-4e90-af6b-6789488d7618] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-b7r6g" [4aa4f881-a38c-4e90-af6b-6789488d7618] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003483106s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.23s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (25.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-649359 exec deployment/netcat -- nslookup kubernetes.default
E0211 03:19:16.210653   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/addons-046133/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:19:20.064958   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/no-preload-214316/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:19:20.071316   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/no-preload-214316/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:19:20.082690   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/no-preload-214316/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:19:20.104089   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/no-preload-214316/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:19:20.145600   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/no-preload-214316/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:19:20.227045   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/no-preload-214316/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:19:20.388565   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/no-preload-214316/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:19:20.710270   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/no-preload-214316/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:19:21.351729   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/no-preload-214316/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:19:22.633484   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/no-preload-214316/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:19:25.195235   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/no-preload-214316/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:175: (dbg) Non-zero exit: kubectl --context auto-649359 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.123992714s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0211 03:19:27.718436   19645 retry.go:31] will retry after 506.072384ms: exit status 1
net_test.go:175: (dbg) Run:  kubectl --context auto-649359 exec deployment/netcat -- nslookup kubernetes.default
E0211 03:19:30.317456   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/no-preload-214316/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:175: (dbg) Done: kubectl --context auto-649359 exec deployment/netcat -- nslookup kubernetes.default: (10.131338977s)
--- PASS: TestNetworkPlugins/group/auto/DNS (25.76s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-649359 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-649359 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (13.00s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-spp7v" [4dac5d02-ba9d-4a57-adf0-ed11648d46f9] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-spp7v" [4dac5d02-ba9d-4a57-adf0-ed11648d46f9] Running
E0211 03:20:01.041051   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/no-preload-214316/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.003323309s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (13.00s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (64.96s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-649359 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-649359 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m4.95942756s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (64.96s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-spp7v" [4dac5d02-ba9d-4a57-adf0-ed11648d46f9] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004124051s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-443106 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-443106 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.54s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-443106 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-443106 -n embed-certs-443106
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-443106 -n embed-certs-443106: exit status 2 (226.64458ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-443106 -n embed-certs-443106
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-443106 -n embed-certs-443106: exit status 2 (231.998367ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-443106 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-443106 -n embed-certs-443106
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-443106 -n embed-certs-443106
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.54s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (83.55s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-649359 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
E0211 03:20:42.002347   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/no-preload-214316/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-649359 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m23.554673763s)
--- PASS: TestNetworkPlugins/group/calico/Start (83.55s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-85nmn" [fa7ae1f5-e5be-4c32-a35d-c64912ea5203] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004697794s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-649359 "pgrep -a kubelet"
I0211 03:21:05.084916   19645 config.go:182] Loaded profile config "kindnet-649359": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (13.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-649359 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-fswdg" [7afccfff-1343-48a1-978e-3d2d0c675c67] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-fswdg" [7afccfff-1343-48a1-978e-3d2d0c675c67] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 13.002751308s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.26s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-649359 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-649359 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-649359 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (70.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-649359 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-649359 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m10.819151186s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (70.82s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.00s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-h4gfc" [0d2d8b70-195a-48cc-a751-3fef09eaf6cf] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003247248s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.00s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-649359 "pgrep -a kubelet"
I0211 03:21:45.819895   19645 config.go:182] Loaded profile config "calico-649359": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-649359 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-zcswc" [246fcbb6-e643-410c-907f-a673888d6a34] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-zcswc" [246fcbb6-e643-410c-907f-a673888d6a34] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.003949694s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.24s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-649359 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-649359 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-649359 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (61.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-649359 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-649359 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m1.50823081s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (61.51s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (12.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-pfr92" [2458b7a6-2daf-43d0-8a49-9955ed41ab62] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0211 03:22:23.754687   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/functional-454298/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-pfr92" [2458b7a6-2daf-43d0-8a49-9955ed41ab62] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.005044538s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (12.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-pfr92" [2458b7a6-2daf-43d0-8a49-9955ed41ab62] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003586063s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-697681 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-697681 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.53s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-697681 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-697681 -n default-k8s-diff-port-697681
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-697681 -n default-k8s-diff-port-697681: exit status 2 (224.930935ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-697681 -n default-k8s-diff-port-697681
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-697681 -n default-k8s-diff-port-697681: exit status 2 (232.599861ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-697681 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-697681 -n default-k8s-diff-port-697681
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-697681 -n default-k8s-diff-port-697681
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.53s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (73.99s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-649359 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-649359 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m13.9849845s)
--- PASS: TestNetworkPlugins/group/flannel/Start (73.99s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-649359 "pgrep -a kubelet"
I0211 03:22:44.949793   19645 config.go:182] Loaded profile config "custom-flannel-649359": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.99s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-649359 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-tv8xg" [8b6d6c42-5998-4f52-8ec9-59afd76253d7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-tv8xg" [8b6d6c42-5998-4f52-8ec9-59afd76253d7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004410895s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.99s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-649359 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-649359 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-649359 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (54.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-649359 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-649359 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (54.785850387s)
--- PASS: TestNetworkPlugins/group/bridge/Start (54.79s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-649359 "pgrep -a kubelet"
I0211 03:23:15.707265   19645 config.go:182] Loaded profile config "enable-default-cni-649359": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-649359 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-r5dkd" [ec3946e0-655f-458e-9e38-b51c89c61e38] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-r5dkd" [ec3946e0-655f-458e-9e38-b51c89c61e38] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.002929446s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.23s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-649359 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-649359 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-649359 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-9cwvz" [91e1b184-ac10-4185-abd2-021d5350d62e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004244752s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
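The controller check above only waits for a pod matching app=flannel to report Running. A minimal shell sketch, assuming the flannel-649359 profile is still up and that the DaemonSet keeps the kube-flannel-ds name visible in the pod name above, of confirming the same thing by hand:

	# list the flannel pods the test waited on (namespace and label taken from the log above)
	kubectl --context flannel-649359 get pods -n kube-flannel -l app=flannel -o wide
	# check the DaemonSet rollout as a whole
	kubectl --context flannel-649359 rollout status daemonset/kube-flannel-ds -n kube-flannel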

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-649359 "pgrep -a kubelet"
E0211 03:24:02.903377   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/auto-649359/client.crt: no such file or directory" logger="UnhandledError"
I0211 03:24:02.969965   19645 config.go:182] Loaded profile config "flannel-649359": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-649359 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-kpdbz" [1155663f-4834-4ecb-8650-41cc2006a135] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0211 03:24:03.225080   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/auto-649359/client.crt: no such file or directory" logger="UnhandledError"
E0211 03:24:03.866647   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/auto-649359/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-5d86dc444-kpdbz" [1155663f-4834-4ecb-8650-41cc2006a135] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003269719s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.21s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-649359 "pgrep -a kubelet"
I0211 03:24:08.022797   19645 config.go:182] Loaded profile config "bridge-649359": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-649359 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-6d2tr" [ca6c5c94-45eb-451b-bfc3-c71c8eaee9a3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-6d2tr" [ca6c5c94-45eb-451b-bfc3-c71c8eaee9a3] Running
E0211 03:24:12.831975   19645 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/auto-649359/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003889682s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.25s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-649359 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-649359 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-649359 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (20.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-649359 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context bridge-649359 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.137812598s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0211 03:24:33.411810   19645 retry.go:31] will retry after 515.207852ms: exit status 1
net_test.go:175: (dbg) Run:  kubectl --context bridge-649359 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Done: kubectl --context bridge-649359 exec deployment/netcat -- nslookup kubernetes.default: (5.137786183s)
--- PASS: TestNetworkPlugins/group/bridge/DNS (20.79s)
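The first lookup above timed out and the subtest only passed on the automatic retry. A minimal shell sketch, assuming the bridge-649359 profile is still running, of reproducing the same in-cluster DNS probe by hand with a small retry loop:

	# repeat the lookup the test runs, up to three attempts, mirroring the retry above
	for i in 1 2 3; do
	  kubectl --context bridge-649359 exec deployment/netcat -- nslookup kubernetes.default && break
	  sleep 2
	done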

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-649359 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-649359 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)
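The HairPin subtest execs into the netcat pod and connects back to the pod's own Service name, so a pass means hairpin traffic works under the bridge CNI. A minimal shell sketch, under the same assumptions as the DNS probe above, of running that check manually:

	# from inside the pod, dial its own Service (name netcat, port 8080, as in the logged command)
	kubectl --context bridge-649359 exec deployment/netcat -- /bin/sh -c "nc -w 5 -z netcat 8080 && echo hairpin ok"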

                                                
                                    

Test skip (40/327)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.32.1/cached-images 0
15 TestDownloadOnly/v1.32.1/binaries 0
16 TestDownloadOnly/v1.32.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.29
33 TestAddons/serial/GCPAuth/RealCredentials 0
39 TestAddons/parallel/Olm 0
46 TestAddons/parallel/AmdGpuDevicePlugin 0
50 TestDockerFlags 0
53 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
142 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
143 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
144 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
145 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
146 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
147 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
148 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
149 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestFunctionalNewestKubernetes 0
157 TestGvisorAddon 0
179 TestImageBuild 0
206 TestKicCustomNetwork 0
207 TestKicExistingNetwork 0
208 TestKicCustomSubnet 0
209 TestKicStaticIP 0
241 TestChangeNoneUser 0
244 TestScheduledStopWindows 0
246 TestSkaffold 0
248 TestInsufficientStorage 0
252 TestMissingContainerUpgrade 0
263 TestStartStop/group/disable-driver-mounts 0.14
267 TestNetworkPlugins/group/kubenet 2.99
279 TestNetworkPlugins/group/cilium 3.95
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0.29s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-046133 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.29s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:480: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:567: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
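All eight TunnelCmd subtests above skip for the same reason: the runner cannot execute 'route' through sudo without a password prompt. A minimal shell sketch, assuming a Linux host with net-tools installed, of checking that precondition before a run:

	# -n makes sudo fail instead of prompting; a non-zero exit means these tunnel tests will skip
	sudo -n route -n >/dev/null 2>&1 && echo "route ok" || echo "password required for route"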

                                                
                                    
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:84: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-575892" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-575892
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                    
TestNetworkPlugins/group/kubenet (2.99s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-649359 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-649359

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-649359

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-649359

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-649359

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-649359

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-649359

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-649359

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-649359

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-649359

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-649359

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649359"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649359"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649359"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-649359

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649359"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649359"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-649359" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-649359" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-649359" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-649359" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-649359" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-649359" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-649359" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-649359" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649359"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649359"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649359"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649359"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649359"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-649359" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-649359" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-649359" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649359"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649359"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649359"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649359"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649359"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20400-12456/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 11 Feb 2025 03:04:23 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.72.237:8443
  name: cert-expiration-411526
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20400-12456/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 11 Feb 2025 03:04:31 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.61.91:8443
  name: running-upgrade-378121
contexts:
- context:
    cluster: cert-expiration-411526
    extensions:
    - extension:
        last-update: Tue, 11 Feb 2025 03:04:23 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: cert-expiration-411526
  name: cert-expiration-411526
- context:
    cluster: running-upgrade-378121
    user: running-upgrade-378121
  name: running-upgrade-378121
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-411526
  user:
    client-certificate: /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/cert-expiration-411526/client.crt
    client-key: /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/cert-expiration-411526/client.key
- name: running-upgrade-378121
  user:
    client-certificate: /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/running-upgrade-378121/client.crt
    client-key: /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/running-upgrade-378121/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-649359

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649359"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649359"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649359"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649359"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649359"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649359"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649359"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649359"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649359"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649359"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649359"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649359"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649359"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649359"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649359"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649359"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649359"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-649359"

                                                
                                                
----------------------- debugLogs end: kubenet-649359 [took: 2.820828402s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-649359" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-649359
--- SKIP: TestNetworkPlugins/group/kubenet (2.99s)
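The repeated "context was not found" and "Profile ... not found" lines in the debugLogs above are expected: the kubenet profile is never created because the test skips on crio, so every probe against the kubenet-649359 context fails by design. A minimal shell sketch, assuming kubectl reads the same kubeconfig the dump was taken from, of confirming which contexts actually exist before chasing those errors:

	# kubenet-649359 (and cilium-649359 below) will be absent, matching the errors in the dump
	kubectl config get-contexts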

                                                
                                    
TestNetworkPlugins/group/cilium (3.95s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-649359 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-649359

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-649359

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-649359

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-649359

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-649359

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-649359

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-649359

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-649359

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-649359

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-649359

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649359"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649359"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649359"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-649359

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649359"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649359"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-649359" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-649359" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-649359" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-649359" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-649359" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-649359" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-649359" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-649359" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649359"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649359"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649359"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649359"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649359"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-649359

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-649359

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-649359" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-649359" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-649359

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-649359

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-649359" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-649359" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-649359" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-649359" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-649359" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649359"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649359"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649359"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649359"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649359"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20400-12456/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 11 Feb 2025 03:04:23 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.72.237:8443
  name: cert-expiration-411526
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20400-12456/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 11 Feb 2025 03:04:31 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.61.91:8443
  name: running-upgrade-378121
contexts:
- context:
    cluster: cert-expiration-411526
    extensions:
    - extension:
        last-update: Tue, 11 Feb 2025 03:04:23 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: cert-expiration-411526
  name: cert-expiration-411526
- context:
    cluster: running-upgrade-378121
    user: running-upgrade-378121
  name: running-upgrade-378121
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-411526
  user:
    client-certificate: /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/cert-expiration-411526/client.crt
    client-key: /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/cert-expiration-411526/client.key
- name: running-upgrade-378121
  user:
    client-certificate: /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/running-upgrade-378121/client.crt
    client-key: /home/jenkins/minikube-integration/20400-12456/.minikube/profiles/running-upgrade-378121/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-649359

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649359"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649359"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649359"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649359"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649359"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649359"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649359"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649359"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649359"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649359"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649359"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649359"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649359"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649359"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649359"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649359"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649359"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-649359" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-649359"

                                                
                                                
----------------------- debugLogs end: cilium-649359 [took: 3.766946992s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-649359" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-649359
--- SKIP: TestNetworkPlugins/group/cilium (3.95s)

                                                
                                    